00:00:00.001 Started by upstream project "autotest-nightly" build number 4250 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3613 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.065 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.066 The recommended git tool is: git 00:00:00.066 using credential 00000000-0000-0000-0000-000000000002 00:00:00.069 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.104 Fetching changes from the remote Git repository 00:00:00.106 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.164 Using shallow fetch with depth 1 00:00:00.164 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.164 > git --version # timeout=10 00:00:00.225 > git --version # 'git version 2.39.2' 00:00:00.225 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.291 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.291 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.450 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.461 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.471 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:04.471 > git config core.sparsecheckout # timeout=10 00:00:04.481 > git read-tree -mu HEAD # timeout=10 00:00:04.500 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:04.518 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:04.518 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:04.619 [Pipeline] Start of Pipeline 00:00:04.628 [Pipeline] library 00:00:04.630 Loading library shm_lib@master 00:00:04.630 Library shm_lib@master is cached. Copying from home. 00:00:04.642 [Pipeline] node 00:00:04.653 Running on WFP43 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:04.655 [Pipeline] { 00:00:04.663 [Pipeline] catchError 00:00:04.664 [Pipeline] { 00:00:04.674 [Pipeline] wrap 00:00:04.680 [Pipeline] { 00:00:04.688 [Pipeline] stage 00:00:04.689 [Pipeline] { (Prologue) 00:00:04.893 [Pipeline] sh 00:00:05.178 + logger -p user.info -t JENKINS-CI 00:00:05.195 [Pipeline] echo 00:00:05.197 Node: WFP43 00:00:05.204 [Pipeline] sh 00:00:05.503 [Pipeline] setCustomBuildProperty 00:00:05.514 [Pipeline] echo 00:00:05.516 Cleanup processes 00:00:05.521 [Pipeline] sh 00:00:05.804 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:05.804 2857741 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:05.815 [Pipeline] sh 00:00:06.102 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.103 ++ grep -v 'sudo pgrep' 00:00:06.103 ++ awk '{print $1}' 00:00:06.103 + sudo kill -9 00:00:06.103 + true 00:00:06.111 [Pipeline] cleanWs 00:00:06.118 [WS-CLEANUP] Deleting project workspace... 00:00:06.118 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.124 [WS-CLEANUP] done 00:00:06.127 [Pipeline] setCustomBuildProperty 00:00:06.135 [Pipeline] sh 00:00:06.413 + sudo git config --global --replace-all safe.directory '*' 00:00:06.490 [Pipeline] httpRequest 00:00:07.082 [Pipeline] echo 00:00:07.084 Sorcerer 10.211.164.101 is alive 00:00:07.092 [Pipeline] retry 00:00:07.094 [Pipeline] { 00:00:07.106 [Pipeline] httpRequest 00:00:07.110 HttpMethod: GET 00:00:07.111 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.111 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.128 Response Code: HTTP/1.1 200 OK 00:00:07.129 Success: Status code 200 is in the accepted range: 200,404 00:00:07.129 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:10.508 [Pipeline] } 00:00:10.525 [Pipeline] // retry 00:00:10.532 [Pipeline] sh 00:00:10.812 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:10.826 [Pipeline] httpRequest 00:00:11.543 [Pipeline] echo 00:00:11.545 Sorcerer 10.211.164.101 is alive 00:00:11.555 [Pipeline] retry 00:00:11.557 [Pipeline] { 00:00:11.571 [Pipeline] httpRequest 00:00:11.576 HttpMethod: GET 00:00:11.576 URL: http://10.211.164.101/packages/spdk_d1c46ed8e5f61500a9ef69d922f8d3f89a4e9cb3.tar.gz 00:00:11.577 Sending request to url: http://10.211.164.101/packages/spdk_d1c46ed8e5f61500a9ef69d922f8d3f89a4e9cb3.tar.gz 00:00:11.597 Response Code: HTTP/1.1 200 OK 00:00:11.598 Success: Status code 200 is in the accepted range: 200,404 00:00:11.598 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_d1c46ed8e5f61500a9ef69d922f8d3f89a4e9cb3.tar.gz 00:01:54.334 [Pipeline] } 00:01:54.351 [Pipeline] // retry 00:01:54.359 [Pipeline] sh 00:01:54.643 + tar --no-same-owner -xf spdk_d1c46ed8e5f61500a9ef69d922f8d3f89a4e9cb3.tar.gz 00:01:57.941 [Pipeline] sh 00:01:58.224 + git -C spdk log --oneline -n5 00:01:58.224 d1c46ed8e lib/rdma_provider: Add API to check if accel seq supported 00:01:58.224 a59d7e018 lib/mlx5: Add API to check if UMR registration supported 00:01:58.224 f6925f5e4 accel/mlx5: Merge crypto+copy to reg UMR 00:01:58.224 008a6371b accel/mlx5: Initial implementation of mlx5 platform driver 00:01:58.224 cc533a3e5 nvme/nvme: Factor out submit_request function 00:01:58.235 [Pipeline] } 00:01:58.248 [Pipeline] // stage 00:01:58.257 [Pipeline] stage 00:01:58.259 [Pipeline] { (Prepare) 00:01:58.275 [Pipeline] writeFile 00:01:58.290 [Pipeline] sh 00:01:58.575 + logger -p user.info -t JENKINS-CI 00:01:58.587 [Pipeline] sh 00:01:58.870 + logger -p user.info -t JENKINS-CI 00:01:58.881 [Pipeline] sh 00:01:59.164 + cat autorun-spdk.conf 00:01:59.164 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:59.164 SPDK_TEST_NVMF=1 00:01:59.164 SPDK_TEST_NVME_CLI=1 00:01:59.164 SPDK_TEST_NVMF_NICS=mlx5 00:01:59.164 SPDK_RUN_ASAN=1 00:01:59.164 SPDK_RUN_UBSAN=1 00:01:59.164 NET_TYPE=phy 00:01:59.171 RUN_NIGHTLY=1 00:01:59.175 [Pipeline] readFile 00:01:59.199 [Pipeline] withEnv 00:01:59.201 [Pipeline] { 00:01:59.213 [Pipeline] sh 00:01:59.497 + set -ex 00:01:59.497 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:59.497 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:59.497 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:59.497 ++ SPDK_TEST_NVMF=1 00:01:59.497 ++ SPDK_TEST_NVME_CLI=1 00:01:59.497 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:59.497 ++ SPDK_RUN_ASAN=1 00:01:59.497 ++ SPDK_RUN_UBSAN=1 00:01:59.497 ++ 
NET_TYPE=phy 00:01:59.497 ++ RUN_NIGHTLY=1 00:01:59.497 + case $SPDK_TEST_NVMF_NICS in 00:01:59.497 + DRIVERS=mlx5_ib 00:01:59.497 + [[ -n mlx5_ib ]] 00:01:59.497 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:59.497 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:06.059 rmmod: ERROR: Module irdma is not currently loaded 00:02:06.059 rmmod: ERROR: Module i40iw is not currently loaded 00:02:06.059 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:06.059 + true 00:02:06.059 + for D in $DRIVERS 00:02:06.059 + sudo modprobe mlx5_ib 00:02:06.059 + exit 0 00:02:06.068 [Pipeline] } 00:02:06.082 [Pipeline] // withEnv 00:02:06.087 [Pipeline] } 00:02:06.102 [Pipeline] // stage 00:02:06.111 [Pipeline] catchError 00:02:06.113 [Pipeline] { 00:02:06.127 [Pipeline] timeout 00:02:06.127 Timeout set to expire in 1 hr 0 min 00:02:06.129 [Pipeline] { 00:02:06.144 [Pipeline] stage 00:02:06.145 [Pipeline] { (Tests) 00:02:06.159 [Pipeline] sh 00:02:06.445 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:02:06.445 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:02:06.445 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:02:06.445 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:02:06.445 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:06.445 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:02:06.445 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:02:06.445 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:02:06.445 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:02:06.445 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:02:06.445 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:02:06.445 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:02:06.445 + source /etc/os-release 00:02:06.445 ++ NAME='Fedora Linux' 00:02:06.445 ++ VERSION='39 (Cloud Edition)' 00:02:06.445 ++ ID=fedora 00:02:06.445 ++ VERSION_ID=39 00:02:06.445 ++ VERSION_CODENAME= 00:02:06.445 ++ PLATFORM_ID=platform:f39 00:02:06.445 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:06.445 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:06.445 ++ LOGO=fedora-logo-icon 00:02:06.445 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:06.445 ++ HOME_URL=https://fedoraproject.org/ 00:02:06.445 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:06.445 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:06.445 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:06.445 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:06.445 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:06.445 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:06.445 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:06.445 ++ SUPPORT_END=2024-11-12 00:02:06.445 ++ VARIANT='Cloud Edition' 00:02:06.445 ++ VARIANT_ID=cloud 00:02:06.445 + uname -a 00:02:06.445 Linux spdk-wfp-43 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:06.445 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:02:09.737 Hugepages 00:02:09.737 node hugesize free / total 00:02:09.737 node0 1048576kB 0 / 0 00:02:09.737 node0 2048kB 0 / 0 00:02:09.737 node1 1048576kB 0 / 0 00:02:09.737 node1 2048kB 0 / 0 00:02:09.737 00:02:09.737 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:09.737 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:02:09.737 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:02:09.737 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:09.737 I/OAT 
0000:00:04.3 8086 2021 0 ioatdma - - 00:02:09.737 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:09.737 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:09.737 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:09.737 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:09.737 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:02:09.737 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:09.737 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:02:09.737 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:09.737 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:09.737 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:09.737 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:09.737 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:09.737 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:09.737 + rm -f /tmp/spdk-ld-path 00:02:09.737 + source autorun-spdk.conf 00:02:09.737 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:09.737 ++ SPDK_TEST_NVMF=1 00:02:09.737 ++ SPDK_TEST_NVME_CLI=1 00:02:09.737 ++ SPDK_TEST_NVMF_NICS=mlx5 00:02:09.737 ++ SPDK_RUN_ASAN=1 00:02:09.737 ++ SPDK_RUN_UBSAN=1 00:02:09.737 ++ NET_TYPE=phy 00:02:09.737 ++ RUN_NIGHTLY=1 00:02:09.737 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:09.737 + [[ -n '' ]] 00:02:09.737 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:09.737 + for M in /var/spdk/build-*-manifest.txt 00:02:09.737 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:09.737 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:02:09.737 + for M in /var/spdk/build-*-manifest.txt 00:02:09.737 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:09.737 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:02:09.737 + for M in /var/spdk/build-*-manifest.txt 00:02:09.737 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:09.737 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:02:09.737 ++ uname 00:02:09.737 + [[ Linux == \L\i\n\u\x ]] 00:02:09.737 + sudo dmesg -T 00:02:09.737 + sudo dmesg --clear 00:02:09.737 + dmesg_pid=2859137 00:02:09.737 + [[ Fedora Linux == FreeBSD ]] 00:02:09.737 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:09.737 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:09.737 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:09.737 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:02:09.737 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:02:09.737 + [[ -x /usr/src/fio-static/fio ]] 00:02:09.737 + export FIO_BIN=/usr/src/fio-static/fio 00:02:09.737 + FIO_BIN=/usr/src/fio-static/fio 00:02:09.737 + sudo dmesg -Tw 00:02:09.737 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:09.737 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:09.737 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:09.737 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:09.737 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:09.737 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:09.737 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:09.737 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:09.737 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:02:09.737 15:06:37 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:09.737 15:06:37 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:02:09.737 15:06:37 -- nvmf-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:09.737 15:06:37 -- nvmf-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:09.737 15:06:37 -- nvmf-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:02:09.737 15:06:37 -- nvmf-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_NICS=mlx5 00:02:09.737 15:06:37 -- nvmf-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:09.737 15:06:37 -- nvmf-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:09.737 15:06:37 -- nvmf-phy-autotest/autorun-spdk.conf@7 -- $ NET_TYPE=phy 00:02:09.737 15:06:37 -- nvmf-phy-autotest/autorun-spdk.conf@8 -- $ RUN_NIGHTLY=1 00:02:09.737 15:06:37 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:09.737 15:06:37 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:02:09.737 15:06:37 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:09.737 15:06:37 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:09.737 15:06:37 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:09.737 15:06:37 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:09.737 15:06:37 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:09.737 15:06:37 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:09.737 15:06:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.737 15:06:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.737 15:06:37 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.737 15:06:37 -- paths/export.sh@5 -- $ export PATH 00:02:09.737 15:06:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.737 15:06:37 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:09.738 15:06:37 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:09.738 15:06:37 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730901997.XXXXXX 00:02:09.738 15:06:37 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730901997.0azUuq 00:02:09.738 15:06:37 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:09.738 15:06:37 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:09.738 15:06:37 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:02:09.738 15:06:37 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:09.738 15:06:37 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:09.738 15:06:37 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:09.738 15:06:37 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:09.738 15:06:37 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.738 15:06:37 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:02:09.738 15:06:37 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:09.738 15:06:37 -- pm/common@17 -- $ local monitor 00:02:09.738 15:06:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.738 15:06:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.738 15:06:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.738 15:06:37 -- pm/common@21 -- $ date +%s 00:02:09.738 15:06:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.738 15:06:37 -- pm/common@21 -- $ date +%s 00:02:09.738 15:06:37 -- pm/common@25 -- $ sleep 1 00:02:09.738 15:06:37 -- pm/common@21 -- $ date +%s 00:02:09.738 15:06:37 -- pm/common@21 -- $ date +%s 00:02:09.738 15:06:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730901997 00:02:09.738 15:06:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730901997 00:02:09.738 15:06:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730901997 00:02:09.738 15:06:37 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730901997 00:02:09.738 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730901997_collect-cpu-load.pm.log 00:02:09.738 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730901997_collect-vmstat.pm.log 00:02:09.738 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730901997_collect-cpu-temp.pm.log 00:02:09.738 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730901997_collect-bmc-pm.bmc.pm.log 00:02:10.676 15:06:38 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:10.676 15:06:38 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:10.676 15:06:38 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:10.676 15:06:38 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:10.676 15:06:38 -- spdk/autobuild.sh@16 -- $ date -u 00:02:10.676 Wed Nov 6 02:06:38 PM UTC 2024 00:02:10.676 15:06:38 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:10.676 v25.01-pre-170-gd1c46ed8e 00:02:10.935 15:06:38 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:10.935 15:06:38 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:10.935 15:06:38 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:10.935 15:06:38 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:10.935 15:06:38 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.935 ************************************ 00:02:10.935 START TEST asan 00:02:10.935 ************************************ 00:02:10.935 15:06:38 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan' 00:02:10.935 using asan 00:02:10.935 00:02:10.935 real 0m0.001s 00:02:10.935 user 0m0.001s 00:02:10.935 sys 0m0.000s 00:02:10.935 15:06:38 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:10.935 15:06:38 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:10.935 ************************************ 00:02:10.935 END TEST asan 00:02:10.935 ************************************ 00:02:10.935 15:06:38 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:10.935 15:06:38 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:10.935 15:06:38 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:10.935 15:06:38 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:10.935 15:06:38 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.935 ************************************ 00:02:10.935 START TEST ubsan 00:02:10.935 ************************************ 00:02:10.935 15:06:38 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:02:10.935 using ubsan 00:02:10.935 00:02:10.935 real 0m0.000s 00:02:10.935 user 
0m0.000s 00:02:10.935 sys 0m0.000s 00:02:10.935 15:06:38 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:10.935 15:06:38 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:10.935 ************************************ 00:02:10.935 END TEST ubsan 00:02:10.936 ************************************ 00:02:10.936 15:06:38 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:10.936 15:06:38 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:10.936 15:06:38 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:10.936 15:06:38 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:10.936 15:06:38 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:10.936 15:06:38 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:10.936 15:06:38 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:10.936 15:06:38 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:10.936 15:06:38 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:02:11.195 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:02:11.195 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:02:11.454 Using 'verbs' RDMA provider 00:02:27.317 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:39.529 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:39.788 Creating mk/config.mk...done. 00:02:39.788 Creating mk/cc.flags.mk...done. 00:02:39.788 Type 'make' to build. 00:02:39.788 15:07:07 -- spdk/autobuild.sh@70 -- $ run_test make make -j72 00:02:39.788 15:07:07 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:39.788 15:07:07 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:39.788 15:07:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:39.788 ************************************ 00:02:39.788 START TEST make 00:02:39.788 ************************************ 00:02:39.788 15:07:07 make -- common/autotest_common.sh@1127 -- $ make -j72 00:02:40.356 make[1]: Nothing to be done for 'all'. 
00:02:50.355 The Meson build system 00:02:50.355 Version: 1.5.0 00:02:50.355 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:02:50.355 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:02:50.355 Build type: native build 00:02:50.355 Program cat found: YES (/usr/bin/cat) 00:02:50.355 Project name: DPDK 00:02:50.355 Project version: 24.03.0 00:02:50.355 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:50.355 C linker for the host machine: cc ld.bfd 2.40-14 00:02:50.355 Host machine cpu family: x86_64 00:02:50.355 Host machine cpu: x86_64 00:02:50.355 Message: ## Building in Developer Mode ## 00:02:50.355 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:50.355 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:50.355 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:50.355 Program python3 found: YES (/usr/bin/python3) 00:02:50.355 Program cat found: YES (/usr/bin/cat) 00:02:50.355 Compiler for C supports arguments -march=native: YES 00:02:50.355 Checking for size of "void *" : 8 00:02:50.355 Checking for size of "void *" : 8 (cached) 00:02:50.355 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:50.355 Library m found: YES 00:02:50.355 Library numa found: YES 00:02:50.355 Has header "numaif.h" : YES 00:02:50.355 Library fdt found: NO 00:02:50.355 Library execinfo found: NO 00:02:50.355 Has header "execinfo.h" : YES 00:02:50.355 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:50.355 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:50.355 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:50.355 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:50.355 Run-time dependency openssl found: YES 3.1.1 00:02:50.355 Run-time dependency libpcap found: YES 1.10.4 00:02:50.355 Has header "pcap.h" with dependency libpcap: YES 00:02:50.355 Compiler for C supports arguments -Wcast-qual: YES 00:02:50.355 Compiler for C supports arguments -Wdeprecated: YES 00:02:50.355 Compiler for C supports arguments -Wformat: YES 00:02:50.355 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:50.355 Compiler for C supports arguments -Wformat-security: NO 00:02:50.355 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:50.355 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:50.355 Compiler for C supports arguments -Wnested-externs: YES 00:02:50.355 Compiler for C supports arguments -Wold-style-definition: YES 00:02:50.355 Compiler for C supports arguments -Wpointer-arith: YES 00:02:50.355 Compiler for C supports arguments -Wsign-compare: YES 00:02:50.355 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:50.355 Compiler for C supports arguments -Wundef: YES 00:02:50.356 Compiler for C supports arguments -Wwrite-strings: YES 00:02:50.356 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:50.356 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:50.356 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:50.356 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:50.356 Program objdump found: YES (/usr/bin/objdump) 00:02:50.356 Compiler for C supports arguments -mavx512f: YES 00:02:50.356 Checking if "AVX512 checking" compiles: YES 00:02:50.356 Fetching 
value of define "__SSE4_2__" : 1 00:02:50.356 Fetching value of define "__AES__" : 1 00:02:50.356 Fetching value of define "__AVX__" : 1 00:02:50.356 Fetching value of define "__AVX2__" : 1 00:02:50.356 Fetching value of define "__AVX512BW__" : 1 00:02:50.356 Fetching value of define "__AVX512CD__" : 1 00:02:50.356 Fetching value of define "__AVX512DQ__" : 1 00:02:50.356 Fetching value of define "__AVX512F__" : 1 00:02:50.356 Fetching value of define "__AVX512VL__" : 1 00:02:50.356 Fetching value of define "__PCLMUL__" : 1 00:02:50.356 Fetching value of define "__RDRND__" : 1 00:02:50.356 Fetching value of define "__RDSEED__" : 1 00:02:50.356 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:50.356 Fetching value of define "__znver1__" : (undefined) 00:02:50.356 Fetching value of define "__znver2__" : (undefined) 00:02:50.356 Fetching value of define "__znver3__" : (undefined) 00:02:50.356 Fetching value of define "__znver4__" : (undefined) 00:02:50.356 Library asan found: YES 00:02:50.356 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:50.356 Message: lib/log: Defining dependency "log" 00:02:50.356 Message: lib/kvargs: Defining dependency "kvargs" 00:02:50.356 Message: lib/telemetry: Defining dependency "telemetry" 00:02:50.356 Library rt found: YES 00:02:50.356 Checking for function "getentropy" : NO 00:02:50.356 Message: lib/eal: Defining dependency "eal" 00:02:50.356 Message: lib/ring: Defining dependency "ring" 00:02:50.356 Message: lib/rcu: Defining dependency "rcu" 00:02:50.356 Message: lib/mempool: Defining dependency "mempool" 00:02:50.356 Message: lib/mbuf: Defining dependency "mbuf" 00:02:50.356 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:50.356 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:50.356 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:50.356 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:50.356 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:50.356 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:50.356 Compiler for C supports arguments -mpclmul: YES 00:02:50.356 Compiler for C supports arguments -maes: YES 00:02:50.356 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:50.356 Compiler for C supports arguments -mavx512bw: YES 00:02:50.356 Compiler for C supports arguments -mavx512dq: YES 00:02:50.356 Compiler for C supports arguments -mavx512vl: YES 00:02:50.356 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:50.356 Compiler for C supports arguments -mavx2: YES 00:02:50.356 Compiler for C supports arguments -mavx: YES 00:02:50.356 Message: lib/net: Defining dependency "net" 00:02:50.356 Message: lib/meter: Defining dependency "meter" 00:02:50.356 Message: lib/ethdev: Defining dependency "ethdev" 00:02:50.356 Message: lib/pci: Defining dependency "pci" 00:02:50.356 Message: lib/cmdline: Defining dependency "cmdline" 00:02:50.356 Message: lib/hash: Defining dependency "hash" 00:02:50.356 Message: lib/timer: Defining dependency "timer" 00:02:50.356 Message: lib/compressdev: Defining dependency "compressdev" 00:02:50.356 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:50.356 Message: lib/dmadev: Defining dependency "dmadev" 00:02:50.356 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:50.356 Message: lib/power: Defining dependency "power" 00:02:50.356 Message: lib/reorder: Defining dependency "reorder" 00:02:50.356 Message: lib/security: Defining dependency "security" 00:02:50.356 Has header "linux/userfaultfd.h" : 
YES 00:02:50.356 Has header "linux/vduse.h" : YES 00:02:50.356 Message: lib/vhost: Defining dependency "vhost" 00:02:50.356 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:50.356 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:50.356 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:50.356 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:50.356 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:50.356 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:50.356 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:50.356 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:50.356 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:50.356 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:50.356 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:50.356 Configuring doxy-api-html.conf using configuration 00:02:50.356 Configuring doxy-api-man.conf using configuration 00:02:50.356 Program mandb found: YES (/usr/bin/mandb) 00:02:50.356 Program sphinx-build found: NO 00:02:50.356 Configuring rte_build_config.h using configuration 00:02:50.356 Message: 00:02:50.356 ================= 00:02:50.356 Applications Enabled 00:02:50.356 ================= 00:02:50.356 00:02:50.356 apps: 00:02:50.356 00:02:50.356 00:02:50.356 Message: 00:02:50.356 ================= 00:02:50.356 Libraries Enabled 00:02:50.356 ================= 00:02:50.356 00:02:50.356 libs: 00:02:50.356 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:50.356 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:50.356 cryptodev, dmadev, power, reorder, security, vhost, 00:02:50.356 00:02:50.356 Message: 00:02:50.356 =============== 00:02:50.356 Drivers Enabled 00:02:50.356 =============== 00:02:50.356 00:02:50.356 common: 00:02:50.356 00:02:50.356 bus: 00:02:50.356 pci, vdev, 00:02:50.356 mempool: 00:02:50.356 ring, 00:02:50.356 dma: 00:02:50.356 00:02:50.356 net: 00:02:50.356 00:02:50.356 crypto: 00:02:50.356 00:02:50.356 compress: 00:02:50.356 00:02:50.356 vdpa: 00:02:50.356 00:02:50.356 00:02:50.356 Message: 00:02:50.356 ================= 00:02:50.356 Content Skipped 00:02:50.356 ================= 00:02:50.356 00:02:50.356 apps: 00:02:50.356 dumpcap: explicitly disabled via build config 00:02:50.356 graph: explicitly disabled via build config 00:02:50.356 pdump: explicitly disabled via build config 00:02:50.356 proc-info: explicitly disabled via build config 00:02:50.356 test-acl: explicitly disabled via build config 00:02:50.356 test-bbdev: explicitly disabled via build config 00:02:50.356 test-cmdline: explicitly disabled via build config 00:02:50.356 test-compress-perf: explicitly disabled via build config 00:02:50.356 test-crypto-perf: explicitly disabled via build config 00:02:50.356 test-dma-perf: explicitly disabled via build config 00:02:50.356 test-eventdev: explicitly disabled via build config 00:02:50.356 test-fib: explicitly disabled via build config 00:02:50.356 test-flow-perf: explicitly disabled via build config 00:02:50.356 test-gpudev: explicitly disabled via build config 00:02:50.356 test-mldev: explicitly disabled via build config 00:02:50.356 test-pipeline: explicitly disabled via build config 00:02:50.356 test-pmd: explicitly disabled via build config 00:02:50.356 test-regex: explicitly disabled via build config 00:02:50.356 test-sad: explicitly disabled 
via build config 00:02:50.356 test-security-perf: explicitly disabled via build config 00:02:50.356 00:02:50.356 libs: 00:02:50.356 argparse: explicitly disabled via build config 00:02:50.356 metrics: explicitly disabled via build config 00:02:50.356 acl: explicitly disabled via build config 00:02:50.356 bbdev: explicitly disabled via build config 00:02:50.356 bitratestats: explicitly disabled via build config 00:02:50.356 bpf: explicitly disabled via build config 00:02:50.356 cfgfile: explicitly disabled via build config 00:02:50.356 distributor: explicitly disabled via build config 00:02:50.356 efd: explicitly disabled via build config 00:02:50.356 eventdev: explicitly disabled via build config 00:02:50.356 dispatcher: explicitly disabled via build config 00:02:50.356 gpudev: explicitly disabled via build config 00:02:50.357 gro: explicitly disabled via build config 00:02:50.357 gso: explicitly disabled via build config 00:02:50.357 ip_frag: explicitly disabled via build config 00:02:50.357 jobstats: explicitly disabled via build config 00:02:50.357 latencystats: explicitly disabled via build config 00:02:50.357 lpm: explicitly disabled via build config 00:02:50.357 member: explicitly disabled via build config 00:02:50.357 pcapng: explicitly disabled via build config 00:02:50.357 rawdev: explicitly disabled via build config 00:02:50.357 regexdev: explicitly disabled via build config 00:02:50.357 mldev: explicitly disabled via build config 00:02:50.357 rib: explicitly disabled via build config 00:02:50.357 sched: explicitly disabled via build config 00:02:50.357 stack: explicitly disabled via build config 00:02:50.357 ipsec: explicitly disabled via build config 00:02:50.357 pdcp: explicitly disabled via build config 00:02:50.357 fib: explicitly disabled via build config 00:02:50.357 port: explicitly disabled via build config 00:02:50.357 pdump: explicitly disabled via build config 00:02:50.357 table: explicitly disabled via build config 00:02:50.357 pipeline: explicitly disabled via build config 00:02:50.357 graph: explicitly disabled via build config 00:02:50.357 node: explicitly disabled via build config 00:02:50.357 00:02:50.357 drivers: 00:02:50.357 common/cpt: not in enabled drivers build config 00:02:50.357 common/dpaax: not in enabled drivers build config 00:02:50.357 common/iavf: not in enabled drivers build config 00:02:50.357 common/idpf: not in enabled drivers build config 00:02:50.357 common/ionic: not in enabled drivers build config 00:02:50.357 common/mvep: not in enabled drivers build config 00:02:50.357 common/octeontx: not in enabled drivers build config 00:02:50.357 bus/auxiliary: not in enabled drivers build config 00:02:50.357 bus/cdx: not in enabled drivers build config 00:02:50.357 bus/dpaa: not in enabled drivers build config 00:02:50.357 bus/fslmc: not in enabled drivers build config 00:02:50.357 bus/ifpga: not in enabled drivers build config 00:02:50.357 bus/platform: not in enabled drivers build config 00:02:50.357 bus/uacce: not in enabled drivers build config 00:02:50.357 bus/vmbus: not in enabled drivers build config 00:02:50.357 common/cnxk: not in enabled drivers build config 00:02:50.357 common/mlx5: not in enabled drivers build config 00:02:50.357 common/nfp: not in enabled drivers build config 00:02:50.357 common/nitrox: not in enabled drivers build config 00:02:50.357 common/qat: not in enabled drivers build config 00:02:50.357 common/sfc_efx: not in enabled drivers build config 00:02:50.357 mempool/bucket: not in enabled drivers build config 
00:02:50.357 mempool/cnxk: not in enabled drivers build config 00:02:50.357 mempool/dpaa: not in enabled drivers build config 00:02:50.357 mempool/dpaa2: not in enabled drivers build config 00:02:50.357 mempool/octeontx: not in enabled drivers build config 00:02:50.357 mempool/stack: not in enabled drivers build config 00:02:50.357 dma/cnxk: not in enabled drivers build config 00:02:50.357 dma/dpaa: not in enabled drivers build config 00:02:50.357 dma/dpaa2: not in enabled drivers build config 00:02:50.357 dma/hisilicon: not in enabled drivers build config 00:02:50.357 dma/idxd: not in enabled drivers build config 00:02:50.357 dma/ioat: not in enabled drivers build config 00:02:50.357 dma/skeleton: not in enabled drivers build config 00:02:50.357 net/af_packet: not in enabled drivers build config 00:02:50.357 net/af_xdp: not in enabled drivers build config 00:02:50.357 net/ark: not in enabled drivers build config 00:02:50.357 net/atlantic: not in enabled drivers build config 00:02:50.357 net/avp: not in enabled drivers build config 00:02:50.357 net/axgbe: not in enabled drivers build config 00:02:50.357 net/bnx2x: not in enabled drivers build config 00:02:50.357 net/bnxt: not in enabled drivers build config 00:02:50.357 net/bonding: not in enabled drivers build config 00:02:50.357 net/cnxk: not in enabled drivers build config 00:02:50.357 net/cpfl: not in enabled drivers build config 00:02:50.357 net/cxgbe: not in enabled drivers build config 00:02:50.357 net/dpaa: not in enabled drivers build config 00:02:50.357 net/dpaa2: not in enabled drivers build config 00:02:50.357 net/e1000: not in enabled drivers build config 00:02:50.357 net/ena: not in enabled drivers build config 00:02:50.357 net/enetc: not in enabled drivers build config 00:02:50.357 net/enetfec: not in enabled drivers build config 00:02:50.357 net/enic: not in enabled drivers build config 00:02:50.357 net/failsafe: not in enabled drivers build config 00:02:50.357 net/fm10k: not in enabled drivers build config 00:02:50.357 net/gve: not in enabled drivers build config 00:02:50.357 net/hinic: not in enabled drivers build config 00:02:50.357 net/hns3: not in enabled drivers build config 00:02:50.357 net/i40e: not in enabled drivers build config 00:02:50.357 net/iavf: not in enabled drivers build config 00:02:50.357 net/ice: not in enabled drivers build config 00:02:50.357 net/idpf: not in enabled drivers build config 00:02:50.357 net/igc: not in enabled drivers build config 00:02:50.357 net/ionic: not in enabled drivers build config 00:02:50.357 net/ipn3ke: not in enabled drivers build config 00:02:50.357 net/ixgbe: not in enabled drivers build config 00:02:50.357 net/mana: not in enabled drivers build config 00:02:50.357 net/memif: not in enabled drivers build config 00:02:50.357 net/mlx4: not in enabled drivers build config 00:02:50.357 net/mlx5: not in enabled drivers build config 00:02:50.357 net/mvneta: not in enabled drivers build config 00:02:50.357 net/mvpp2: not in enabled drivers build config 00:02:50.357 net/netvsc: not in enabled drivers build config 00:02:50.357 net/nfb: not in enabled drivers build config 00:02:50.357 net/nfp: not in enabled drivers build config 00:02:50.357 net/ngbe: not in enabled drivers build config 00:02:50.357 net/null: not in enabled drivers build config 00:02:50.357 net/octeontx: not in enabled drivers build config 00:02:50.357 net/octeon_ep: not in enabled drivers build config 00:02:50.357 net/pcap: not in enabled drivers build config 00:02:50.357 net/pfe: not in enabled drivers build 
config 00:02:50.357 net/qede: not in enabled drivers build config 00:02:50.357 net/ring: not in enabled drivers build config 00:02:50.357 net/sfc: not in enabled drivers build config 00:02:50.357 net/softnic: not in enabled drivers build config 00:02:50.357 net/tap: not in enabled drivers build config 00:02:50.357 net/thunderx: not in enabled drivers build config 00:02:50.357 net/txgbe: not in enabled drivers build config 00:02:50.357 net/vdev_netvsc: not in enabled drivers build config 00:02:50.357 net/vhost: not in enabled drivers build config 00:02:50.357 net/virtio: not in enabled drivers build config 00:02:50.357 net/vmxnet3: not in enabled drivers build config 00:02:50.357 raw/*: missing internal dependency, "rawdev" 00:02:50.357 crypto/armv8: not in enabled drivers build config 00:02:50.357 crypto/bcmfs: not in enabled drivers build config 00:02:50.357 crypto/caam_jr: not in enabled drivers build config 00:02:50.357 crypto/ccp: not in enabled drivers build config 00:02:50.357 crypto/cnxk: not in enabled drivers build config 00:02:50.357 crypto/dpaa_sec: not in enabled drivers build config 00:02:50.357 crypto/dpaa2_sec: not in enabled drivers build config 00:02:50.357 crypto/ipsec_mb: not in enabled drivers build config 00:02:50.357 crypto/mlx5: not in enabled drivers build config 00:02:50.357 crypto/mvsam: not in enabled drivers build config 00:02:50.357 crypto/nitrox: not in enabled drivers build config 00:02:50.357 crypto/null: not in enabled drivers build config 00:02:50.357 crypto/octeontx: not in enabled drivers build config 00:02:50.357 crypto/openssl: not in enabled drivers build config 00:02:50.357 crypto/scheduler: not in enabled drivers build config 00:02:50.357 crypto/uadk: not in enabled drivers build config 00:02:50.357 crypto/virtio: not in enabled drivers build config 00:02:50.357 compress/isal: not in enabled drivers build config 00:02:50.357 compress/mlx5: not in enabled drivers build config 00:02:50.357 compress/nitrox: not in enabled drivers build config 00:02:50.357 compress/octeontx: not in enabled drivers build config 00:02:50.357 compress/zlib: not in enabled drivers build config 00:02:50.357 regex/*: missing internal dependency, "regexdev" 00:02:50.357 ml/*: missing internal dependency, "mldev" 00:02:50.357 vdpa/ifc: not in enabled drivers build config 00:02:50.357 vdpa/mlx5: not in enabled drivers build config 00:02:50.357 vdpa/nfp: not in enabled drivers build config 00:02:50.357 vdpa/sfc: not in enabled drivers build config 00:02:50.357 event/*: missing internal dependency, "eventdev" 00:02:50.357 baseband/*: missing internal dependency, "bbdev" 00:02:50.357 gpu/*: missing internal dependency, "gpudev" 00:02:50.357 00:02:50.357 00:02:50.357 Build targets in project: 85 00:02:50.357 00:02:50.357 DPDK 24.03.0 00:02:50.357 00:02:50.357 User defined options 00:02:50.357 buildtype : debug 00:02:50.357 default_library : shared 00:02:50.357 libdir : lib 00:02:50.357 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:02:50.357 b_sanitize : address 00:02:50.357 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:50.357 c_link_args : 00:02:50.357 cpu_instruction_set: native 00:02:50.358 disable_apps : pdump,dumpcap,test-cmdline,test-pmd,test-crypto-perf,test-gpudev,proc-info,graph,test-flow-perf,test-compress-perf,test-fib,test-regex,test-eventdev,test-security-perf,test,test-dma-perf,test-acl,test-pipeline,test-bbdev,test-sad,test-mldev 00:02:50.358 disable_libs : 
pdump,gpudev,rawdev,pcapng,node,metrics,bitratestats,member,pdcp,eventdev,lpm,table,distributor,regexdev,bpf,acl,stack,ipsec,graph,pipeline,gso,latencystats,jobstats,port,cfgfile,dispatcher,sched,bbdev,gro,rib,argparse,fib,efd,mldev,ip_frag 00:02:50.358 enable_docs : false 00:02:50.358 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:50.358 enable_kmods : false 00:02:50.358 max_lcores : 128 00:02:50.358 tests : false 00:02:50.358 00:02:50.358 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:50.358 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:02:50.358 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:50.358 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:50.358 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:50.358 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:50.358 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:50.358 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:50.358 [7/268] Linking static target lib/librte_kvargs.a 00:02:50.358 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:50.358 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:50.358 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:50.358 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:50.358 [12/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:50.358 [13/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:50.358 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:50.358 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:50.358 [16/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:50.358 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:50.358 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:50.358 [19/268] Linking static target lib/librte_log.a 00:02:50.627 [20/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:50.627 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:50.627 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:50.627 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:50.627 [24/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:50.627 [25/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:50.627 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:50.627 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:50.627 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:50.627 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:50.627 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:50.627 [31/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:50.627 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:50.627 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:50.627 [34/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:50.627 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:50.627 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:50.627 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:50.627 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:50.627 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:50.627 [40/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:50.627 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:50.627 [42/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:50.627 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:50.627 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:50.627 [45/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:50.627 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:50.627 [47/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:50.627 [48/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:50.627 [49/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:50.627 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:50.627 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:50.627 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:50.627 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:50.627 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:50.627 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:50.627 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:50.627 [57/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.627 [58/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:50.627 [59/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:50.627 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:50.627 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:50.627 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:50.627 [63/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:50.627 [64/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:50.627 [65/268] Linking static target lib/librte_ring.a 00:02:50.627 [66/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:50.627 [67/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:50.627 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:50.627 [69/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:50.627 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:50.627 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:50.627 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:50.627 [73/268] Linking static target lib/librte_pci.a 00:02:50.627 [74/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:50.627 [75/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:50.627 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:50.627 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:50.627 [78/268] Linking static target lib/librte_telemetry.a 00:02:50.627 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:50.627 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:50.627 [81/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:50.627 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:50.887 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:50.887 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:50.887 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:50.887 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:50.887 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:50.887 [88/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:50.887 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:50.887 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:50.887 [91/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:50.887 [92/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:50.887 [93/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:50.887 [94/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:50.887 [95/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:50.887 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:50.887 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:50.887 [98/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:50.887 [99/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:50.887 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:50.887 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:50.887 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:50.887 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:50.887 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:50.887 [105/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:50.887 [106/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:50.887 [107/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:51.145 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:51.145 [109/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:51.145 [110/268] Linking static target lib/librte_mempool.a 00:02:51.145 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:51.145 [112/268] Linking static target lib/librte_meter.a 00:02:51.145 [113/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:51.145 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:51.145 [115/268] Generating lib/pci.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:51.145 [116/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:51.145 [117/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:51.145 [118/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:51.145 [119/268] Linking static target lib/librte_rcu.a 00:02:51.145 [120/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:51.145 [121/268] Linking static target lib/librte_net.a 00:02:51.145 [122/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:51.145 [123/268] Linking static target lib/librte_eal.a 00:02:51.145 [124/268] Linking static target lib/librte_cmdline.a 00:02:51.145 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:51.145 [126/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:51.145 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:51.145 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:51.145 [129/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:51.145 [130/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.145 [131/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:51.145 [132/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:51.145 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:51.145 [134/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:51.145 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:51.404 [136/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.404 [137/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:51.404 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:51.404 [139/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:51.404 [140/268] Linking static target lib/librte_timer.a 00:02:51.404 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:51.404 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:51.404 [143/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:51.404 [144/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:51.404 [145/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:51.404 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:51.404 [147/268] Linking target lib/librte_log.so.24.1 00:02:51.404 [148/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:51.404 [149/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.404 [150/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:51.404 [151/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.404 [152/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:51.404 [153/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:51.404 [154/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:51.404 [155/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:51.404 [156/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:51.404 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:51.404 [158/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:51.404 [159/268] Linking static target lib/librte_power.a 00:02:51.404 [160/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:51.404 [161/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:51.404 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:51.404 [163/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:51.404 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:51.404 [165/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.404 [166/268] Linking static target lib/librte_dmadev.a 00:02:51.404 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:51.404 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:51.404 [169/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:51.404 [170/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.404 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:51.404 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:51.404 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:51.662 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:51.662 [175/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:51.662 [176/268] Linking target lib/librte_telemetry.so.24.1 00:02:51.662 [177/268] Linking target lib/librte_kvargs.so.24.1 00:02:51.662 [178/268] Linking static target lib/librte_compressdev.a 00:02:51.662 [179/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:51.662 [180/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:51.662 [181/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:51.662 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:51.662 [183/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:51.662 [184/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:51.662 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:51.662 [186/268] Linking static target lib/librte_reorder.a 00:02:51.662 [187/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:51.662 [188/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:51.662 [189/268] Linking static target drivers/librte_bus_vdev.a 00:02:51.662 [190/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:51.662 [191/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:51.662 [192/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:51.662 [193/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.662 [194/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:51.663 [195/268] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:02:51.663 [196/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:51.663 [197/268] Linking static target lib/librte_mbuf.a 00:02:51.663 [198/268] Linking static target lib/librte_security.a 00:02:51.921 [199/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.921 [200/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:51.921 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:51.921 [202/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:51.921 [203/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:51.921 [204/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:51.921 [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:51.921 [206/268] Linking static target drivers/librte_bus_pci.a 00:02:51.921 [207/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.921 [208/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.922 [209/268] Linking static target drivers/librte_mempool_ring.a 00:02:51.922 [210/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:51.922 [211/268] Linking static target lib/librte_hash.a 00:02:51.922 [212/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.223 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.223 [214/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.223 [215/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.526 [216/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:52.526 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:52.526 [218/268] Linking static target lib/librte_cryptodev.a 00:02:52.526 [219/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.526 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.526 [221/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.526 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.842 [223/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.122 [224/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:53.122 [225/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.122 [226/268] Linking static target lib/librte_ethdev.a 00:02:54.057 [227/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:54.625 [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.914 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:57.914 [230/268] Linking static target lib/librte_vhost.a 00:02:59.822 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.111 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by 
meson to capture output) 00:03:04.048 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.048 [234/268] Linking target lib/librte_eal.so.24.1 00:03:04.048 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:04.308 [236/268] Linking target lib/librte_ring.so.24.1 00:03:04.308 [237/268] Linking target lib/librte_meter.so.24.1 00:03:04.308 [238/268] Linking target lib/librte_pci.so.24.1 00:03:04.308 [239/268] Linking target lib/librte_dmadev.so.24.1 00:03:04.308 [240/268] Linking target lib/librte_timer.so.24.1 00:03:04.308 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:04.308 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:04.308 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:04.308 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:04.308 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:04.308 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:04.308 [247/268] Linking target lib/librte_mempool.so.24.1 00:03:04.308 [248/268] Linking target lib/librte_rcu.so.24.1 00:03:04.308 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:04.567 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:04.567 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:04.567 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:04.567 [253/268] Linking target lib/librte_mbuf.so.24.1 00:03:04.826 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:04.826 [255/268] Linking target lib/librte_compressdev.so.24.1 00:03:04.826 [256/268] Linking target lib/librte_net.so.24.1 00:03:04.826 [257/268] Linking target lib/librte_reorder.so.24.1 00:03:04.826 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:03:04.826 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:04.826 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:05.085 [261/268] Linking target lib/librte_cmdline.so.24.1 00:03:05.085 [262/268] Linking target lib/librte_hash.so.24.1 00:03:05.085 [263/268] Linking target lib/librte_security.so.24.1 00:03:05.085 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:05.085 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:05.085 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:05.085 [267/268] Linking target lib/librte_power.so.24.1 00:03:05.085 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:05.343 INFO: autodetecting backend as ninja 00:03:05.343 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 72 00:03:13.459 CC lib/ut/ut.o 00:03:13.459 CC lib/log/log.o 00:03:13.459 CC lib/log/log_flags.o 00:03:13.459 CC lib/ut_mock/mock.o 00:03:13.459 CC lib/log/log_deprecated.o 00:03:13.459 LIB libspdk_ut_mock.a 00:03:13.459 LIB libspdk_ut.a 00:03:13.459 LIB libspdk_log.a 00:03:13.459 SO libspdk_ut_mock.so.6.0 00:03:13.459 SO libspdk_ut.so.2.0 00:03:13.459 SO libspdk_log.so.7.1 00:03:13.459 SYMLINK libspdk_ut_mock.so 00:03:13.459 SYMLINK libspdk_ut.so 00:03:13.459 SYMLINK libspdk_log.so 
00:03:14.027 CC lib/ioat/ioat.o 00:03:14.027 CC lib/dma/dma.o 00:03:14.027 CC lib/util/base64.o 00:03:14.027 CC lib/util/bit_array.o 00:03:14.027 CC lib/util/cpuset.o 00:03:14.027 CC lib/util/crc16.o 00:03:14.027 CC lib/util/crc32.o 00:03:14.027 CC lib/util/crc32c.o 00:03:14.027 CXX lib/trace_parser/trace.o 00:03:14.027 CC lib/util/crc32_ieee.o 00:03:14.027 CC lib/util/crc64.o 00:03:14.027 CC lib/util/dif.o 00:03:14.027 CC lib/util/fd.o 00:03:14.027 CC lib/util/fd_group.o 00:03:14.027 CC lib/util/file.o 00:03:14.027 CC lib/util/hexlify.o 00:03:14.027 CC lib/util/iov.o 00:03:14.027 CC lib/util/math.o 00:03:14.027 CC lib/util/net.o 00:03:14.027 CC lib/util/pipe.o 00:03:14.027 CC lib/util/strerror_tls.o 00:03:14.027 CC lib/util/string.o 00:03:14.027 CC lib/util/uuid.o 00:03:14.027 CC lib/util/xor.o 00:03:14.027 CC lib/util/zipf.o 00:03:14.027 CC lib/util/md5.o 00:03:14.027 CC lib/vfio_user/host/vfio_user_pci.o 00:03:14.027 CC lib/vfio_user/host/vfio_user.o 00:03:14.027 LIB libspdk_dma.a 00:03:14.027 SO libspdk_dma.so.5.0 00:03:14.286 SYMLINK libspdk_dma.so 00:03:14.286 LIB libspdk_ioat.a 00:03:14.286 SO libspdk_ioat.so.7.0 00:03:14.286 SYMLINK libspdk_ioat.so 00:03:14.286 LIB libspdk_vfio_user.a 00:03:14.286 SO libspdk_vfio_user.so.5.0 00:03:14.545 SYMLINK libspdk_vfio_user.so 00:03:14.545 LIB libspdk_util.a 00:03:14.545 SO libspdk_util.so.10.1 00:03:14.803 SYMLINK libspdk_util.so 00:03:14.803 LIB libspdk_trace_parser.a 00:03:14.803 SO libspdk_trace_parser.so.6.0 00:03:14.803 SYMLINK libspdk_trace_parser.so 00:03:15.061 CC lib/idxd/idxd.o 00:03:15.061 CC lib/idxd/idxd_user.o 00:03:15.061 CC lib/idxd/idxd_kernel.o 00:03:15.061 CC lib/json/json_parse.o 00:03:15.061 CC lib/rdma_utils/rdma_utils.o 00:03:15.061 CC lib/json/json_util.o 00:03:15.061 CC lib/conf/conf.o 00:03:15.061 CC lib/json/json_write.o 00:03:15.061 CC lib/vmd/vmd.o 00:03:15.061 CC lib/vmd/led.o 00:03:15.061 CC lib/env_dpdk/env.o 00:03:15.061 CC lib/env_dpdk/memory.o 00:03:15.061 CC lib/env_dpdk/pci.o 00:03:15.061 CC lib/env_dpdk/init.o 00:03:15.061 CC lib/env_dpdk/threads.o 00:03:15.061 CC lib/env_dpdk/pci_ioat.o 00:03:15.061 CC lib/env_dpdk/pci_virtio.o 00:03:15.061 CC lib/env_dpdk/pci_vmd.o 00:03:15.061 CC lib/env_dpdk/pci_idxd.o 00:03:15.061 CC lib/env_dpdk/sigbus_handler.o 00:03:15.061 CC lib/env_dpdk/pci_event.o 00:03:15.061 CC lib/env_dpdk/pci_dpdk.o 00:03:15.061 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:15.061 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:15.320 LIB libspdk_conf.a 00:03:15.320 SO libspdk_conf.so.6.0 00:03:15.320 LIB libspdk_rdma_utils.a 00:03:15.320 LIB libspdk_json.a 00:03:15.578 SO libspdk_rdma_utils.so.1.0 00:03:15.578 SO libspdk_json.so.6.0 00:03:15.578 SYMLINK libspdk_conf.so 00:03:15.578 SYMLINK libspdk_rdma_utils.so 00:03:15.578 SYMLINK libspdk_json.so 00:03:15.836 LIB libspdk_idxd.a 00:03:15.836 LIB libspdk_vmd.a 00:03:15.836 SO libspdk_idxd.so.12.1 00:03:15.836 SO libspdk_vmd.so.6.0 00:03:15.836 CC lib/rdma_provider/common.o 00:03:15.836 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:15.836 SYMLINK libspdk_idxd.so 00:03:15.836 CC lib/jsonrpc/jsonrpc_server.o 00:03:15.836 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:15.836 CC lib/jsonrpc/jsonrpc_client.o 00:03:15.836 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:15.836 SYMLINK libspdk_vmd.so 00:03:16.094 LIB libspdk_rdma_provider.a 00:03:16.094 SO libspdk_rdma_provider.so.7.0 00:03:16.094 LIB libspdk_jsonrpc.a 00:03:16.094 SYMLINK libspdk_rdma_provider.so 00:03:16.094 SO libspdk_jsonrpc.so.6.0 00:03:16.354 SYMLINK libspdk_jsonrpc.so 00:03:16.612 LIB 
libspdk_env_dpdk.a 00:03:16.612 SO libspdk_env_dpdk.so.15.1 00:03:16.612 CC lib/rpc/rpc.o 00:03:16.612 SYMLINK libspdk_env_dpdk.so 00:03:16.871 LIB libspdk_rpc.a 00:03:16.871 SO libspdk_rpc.so.6.0 00:03:16.871 SYMLINK libspdk_rpc.so 00:03:17.438 CC lib/trace/trace.o 00:03:17.438 CC lib/trace/trace_flags.o 00:03:17.438 CC lib/trace/trace_rpc.o 00:03:17.438 CC lib/notify/notify.o 00:03:17.438 CC lib/notify/notify_rpc.o 00:03:17.438 CC lib/keyring/keyring.o 00:03:17.438 CC lib/keyring/keyring_rpc.o 00:03:17.438 LIB libspdk_notify.a 00:03:17.438 SO libspdk_notify.so.6.0 00:03:17.696 LIB libspdk_keyring.a 00:03:17.696 LIB libspdk_trace.a 00:03:17.696 SYMLINK libspdk_notify.so 00:03:17.696 SO libspdk_keyring.so.2.0 00:03:17.696 SO libspdk_trace.so.11.0 00:03:17.696 SYMLINK libspdk_keyring.so 00:03:17.696 SYMLINK libspdk_trace.so 00:03:17.956 CC lib/thread/thread.o 00:03:17.956 CC lib/thread/iobuf.o 00:03:17.956 CC lib/sock/sock.o 00:03:17.956 CC lib/sock/sock_rpc.o 00:03:18.524 LIB libspdk_sock.a 00:03:18.524 SO libspdk_sock.so.10.0 00:03:18.524 SYMLINK libspdk_sock.so 00:03:19.091 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:19.091 CC lib/nvme/nvme_ctrlr.o 00:03:19.091 CC lib/nvme/nvme_fabric.o 00:03:19.091 CC lib/nvme/nvme_ns_cmd.o 00:03:19.091 CC lib/nvme/nvme_ns.o 00:03:19.091 CC lib/nvme/nvme_pcie_common.o 00:03:19.091 CC lib/nvme/nvme_pcie.o 00:03:19.091 CC lib/nvme/nvme_qpair.o 00:03:19.091 CC lib/nvme/nvme.o 00:03:19.091 CC lib/nvme/nvme_quirks.o 00:03:19.091 CC lib/nvme/nvme_transport.o 00:03:19.091 CC lib/nvme/nvme_discovery.o 00:03:19.091 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:19.091 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:19.091 CC lib/nvme/nvme_tcp.o 00:03:19.091 CC lib/nvme/nvme_opal.o 00:03:19.091 CC lib/nvme/nvme_io_msg.o 00:03:19.091 CC lib/nvme/nvme_poll_group.o 00:03:19.091 CC lib/nvme/nvme_stubs.o 00:03:19.091 CC lib/nvme/nvme_zns.o 00:03:19.091 CC lib/nvme/nvme_auth.o 00:03:19.091 CC lib/nvme/nvme_cuse.o 00:03:19.091 CC lib/nvme/nvme_rdma.o 00:03:19.658 LIB libspdk_thread.a 00:03:19.658 SO libspdk_thread.so.11.0 00:03:19.658 SYMLINK libspdk_thread.so 00:03:19.916 CC lib/fsdev/fsdev.o 00:03:19.916 CC lib/fsdev/fsdev_rpc.o 00:03:19.916 CC lib/fsdev/fsdev_io.o 00:03:19.916 CC lib/virtio/virtio.o 00:03:19.916 CC lib/virtio/virtio_vfio_user.o 00:03:19.916 CC lib/virtio/virtio_vhost_user.o 00:03:19.916 CC lib/virtio/virtio_pci.o 00:03:19.916 CC lib/accel/accel_rpc.o 00:03:19.916 CC lib/accel/accel.o 00:03:19.916 CC lib/blob/request.o 00:03:19.916 CC lib/blob/blobstore.o 00:03:19.916 CC lib/accel/accel_sw.o 00:03:19.916 CC lib/init/json_config.o 00:03:19.916 CC lib/blob/zeroes.o 00:03:19.916 CC lib/blob/blob_bs_dev.o 00:03:19.916 CC lib/init/subsystem.o 00:03:19.916 CC lib/init/subsystem_rpc.o 00:03:19.916 CC lib/init/rpc.o 00:03:20.174 LIB libspdk_init.a 00:03:20.174 SO libspdk_init.so.6.0 00:03:20.432 LIB libspdk_virtio.a 00:03:20.432 SYMLINK libspdk_init.so 00:03:20.432 SO libspdk_virtio.so.7.0 00:03:20.432 SYMLINK libspdk_virtio.so 00:03:20.690 LIB libspdk_fsdev.a 00:03:20.690 SO libspdk_fsdev.so.2.0 00:03:20.690 CC lib/event/app.o 00:03:20.690 CC lib/event/reactor.o 00:03:20.690 CC lib/event/log_rpc.o 00:03:20.690 CC lib/event/app_rpc.o 00:03:20.690 CC lib/event/scheduler_static.o 00:03:20.690 SYMLINK libspdk_fsdev.so 00:03:21.256 LIB libspdk_accel.a 00:03:21.256 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:21.256 LIB libspdk_nvme.a 00:03:21.256 SO libspdk_accel.so.16.0 00:03:21.256 LIB libspdk_event.a 00:03:21.256 SYMLINK libspdk_accel.so 00:03:21.256 SO libspdk_event.so.14.0 
00:03:21.256 SO libspdk_nvme.so.15.0 00:03:21.256 SYMLINK libspdk_event.so 00:03:21.515 SYMLINK libspdk_nvme.so 00:03:21.515 CC lib/bdev/bdev.o 00:03:21.515 CC lib/bdev/bdev_rpc.o 00:03:21.515 CC lib/bdev/bdev_zone.o 00:03:21.515 CC lib/bdev/part.o 00:03:21.515 CC lib/bdev/scsi_nvme.o 00:03:21.774 LIB libspdk_fuse_dispatcher.a 00:03:21.774 SO libspdk_fuse_dispatcher.so.1.0 00:03:21.774 SYMLINK libspdk_fuse_dispatcher.so 00:03:23.151 LIB libspdk_blob.a 00:03:23.151 SO libspdk_blob.so.11.0 00:03:23.151 SYMLINK libspdk_blob.so 00:03:23.718 CC lib/lvol/lvol.o 00:03:23.718 CC lib/blobfs/blobfs.o 00:03:23.718 CC lib/blobfs/tree.o 00:03:23.976 LIB libspdk_bdev.a 00:03:24.235 SO libspdk_bdev.so.17.0 00:03:24.235 SYMLINK libspdk_bdev.so 00:03:24.493 LIB libspdk_blobfs.a 00:03:24.493 SO libspdk_blobfs.so.10.0 00:03:24.493 LIB libspdk_lvol.a 00:03:24.493 SYMLINK libspdk_blobfs.so 00:03:24.493 SO libspdk_lvol.so.10.0 00:03:24.493 CC lib/nbd/nbd.o 00:03:24.493 CC lib/nbd/nbd_rpc.o 00:03:24.493 CC lib/nvmf/ctrlr_discovery.o 00:03:24.493 CC lib/nvmf/ctrlr.o 00:03:24.493 CC lib/ftl/ftl_core.o 00:03:24.493 CC lib/ublk/ublk.o 00:03:24.493 CC lib/ftl/ftl_init.o 00:03:24.493 CC lib/nvmf/ctrlr_bdev.o 00:03:24.493 CC lib/nvmf/subsystem.o 00:03:24.493 CC lib/ftl/ftl_layout.o 00:03:24.493 CC lib/nvmf/nvmf_rpc.o 00:03:24.493 CC lib/nvmf/nvmf.o 00:03:24.493 CC lib/ublk/ublk_rpc.o 00:03:24.493 CC lib/ftl/ftl_debug.o 00:03:24.493 CC lib/ftl/ftl_io.o 00:03:24.493 CC lib/nvmf/transport.o 00:03:24.493 CC lib/ftl/ftl_sb.o 00:03:24.493 CC lib/nvmf/tcp.o 00:03:24.493 CC lib/ftl/ftl_l2p.o 00:03:24.493 CC lib/nvmf/stubs.o 00:03:24.493 CC lib/scsi/dev.o 00:03:24.493 CC lib/ftl/ftl_l2p_flat.o 00:03:24.493 CC lib/nvmf/mdns_server.o 00:03:24.493 CC lib/ftl/ftl_nv_cache.o 00:03:24.493 CC lib/scsi/lun.o 00:03:24.493 CC lib/scsi/port.o 00:03:24.493 CC lib/nvmf/rdma.o 00:03:24.493 CC lib/ftl/ftl_band.o 00:03:24.493 CC lib/nvmf/auth.o 00:03:24.493 CC lib/scsi/scsi.o 00:03:24.493 CC lib/ftl/ftl_band_ops.o 00:03:24.493 CC lib/scsi/scsi_bdev.o 00:03:24.493 CC lib/scsi/scsi_pr.o 00:03:24.493 CC lib/ftl/ftl_writer.o 00:03:24.493 CC lib/scsi/scsi_rpc.o 00:03:24.493 CC lib/ftl/ftl_rq.o 00:03:24.493 CC lib/ftl/ftl_reloc.o 00:03:24.493 CC lib/scsi/task.o 00:03:24.493 CC lib/ftl/ftl_p2l.o 00:03:24.493 CC lib/ftl/ftl_l2p_cache.o 00:03:24.493 CC lib/ftl/ftl_p2l_log.o 00:03:24.493 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:24.493 CC lib/ftl/mngt/ftl_mngt.o 00:03:24.493 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:24.493 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:24.493 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:24.493 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:24.493 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:24.493 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:24.760 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:24.760 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:24.760 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:24.760 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:24.760 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:24.760 CC lib/ftl/utils/ftl_conf.o 00:03:24.760 CC lib/ftl/utils/ftl_md.o 00:03:24.760 CC lib/ftl/utils/ftl_mempool.o 00:03:24.760 CC lib/ftl/utils/ftl_bitmap.o 00:03:24.760 CC lib/ftl/utils/ftl_property.o 00:03:24.760 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:24.760 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:24.760 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:24.760 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:24.760 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:24.760 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:24.760 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:24.760 CC 
lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:24.760 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:24.760 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:24.760 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:24.760 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:24.760 SYMLINK libspdk_lvol.so 00:03:24.760 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:25.021 CC lib/ftl/base/ftl_base_dev.o 00:03:25.021 CC lib/ftl/base/ftl_base_bdev.o 00:03:25.021 CC lib/ftl/ftl_trace.o 00:03:25.281 LIB libspdk_nbd.a 00:03:25.281 SO libspdk_nbd.so.7.0 00:03:25.281 SYMLINK libspdk_nbd.so 00:03:25.281 LIB libspdk_scsi.a 00:03:25.540 SO libspdk_scsi.so.9.0 00:03:25.540 SYMLINK libspdk_scsi.so 00:03:25.540 LIB libspdk_ublk.a 00:03:25.540 SO libspdk_ublk.so.3.0 00:03:25.798 SYMLINK libspdk_ublk.so 00:03:25.798 CC lib/vhost/vhost.o 00:03:25.798 CC lib/vhost/vhost_rpc.o 00:03:25.798 CC lib/vhost/vhost_scsi.o 00:03:25.798 CC lib/vhost/vhost_blk.o 00:03:25.798 CC lib/vhost/rte_vhost_user.o 00:03:25.798 CC lib/iscsi/conn.o 00:03:25.798 CC lib/iscsi/init_grp.o 00:03:25.798 CC lib/iscsi/iscsi.o 00:03:25.798 CC lib/iscsi/param.o 00:03:25.798 LIB libspdk_ftl.a 00:03:25.798 CC lib/iscsi/tgt_node.o 00:03:25.798 CC lib/iscsi/portal_grp.o 00:03:25.798 CC lib/iscsi/iscsi_subsystem.o 00:03:25.798 CC lib/iscsi/task.o 00:03:25.798 CC lib/iscsi/iscsi_rpc.o 00:03:26.056 SO libspdk_ftl.so.9.0 00:03:26.314 SYMLINK libspdk_ftl.so 00:03:26.881 LIB libspdk_vhost.a 00:03:26.881 SO libspdk_vhost.so.8.0 00:03:26.881 SYMLINK libspdk_vhost.so 00:03:27.140 LIB libspdk_nvmf.a 00:03:27.140 SO libspdk_nvmf.so.20.0 00:03:27.400 LIB libspdk_iscsi.a 00:03:27.400 SO libspdk_iscsi.so.8.0 00:03:27.400 SYMLINK libspdk_nvmf.so 00:03:27.400 SYMLINK libspdk_iscsi.so 00:03:27.968 CC module/env_dpdk/env_dpdk_rpc.o 00:03:28.226 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:28.226 CC module/sock/posix/posix.o 00:03:28.226 CC module/fsdev/aio/linux_aio_mgr.o 00:03:28.226 CC module/fsdev/aio/fsdev_aio.o 00:03:28.226 CC module/blob/bdev/blob_bdev.o 00:03:28.226 CC module/accel/dsa/accel_dsa.o 00:03:28.226 LIB libspdk_env_dpdk_rpc.a 00:03:28.226 CC module/accel/dsa/accel_dsa_rpc.o 00:03:28.226 CC module/accel/iaa/accel_iaa.o 00:03:28.226 CC module/accel/ioat/accel_ioat.o 00:03:28.226 CC module/accel/iaa/accel_iaa_rpc.o 00:03:28.226 CC module/accel/ioat/accel_ioat_rpc.o 00:03:28.226 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:28.226 CC module/keyring/linux/keyring.o 00:03:28.226 CC module/keyring/linux/keyring_rpc.o 00:03:28.226 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:28.226 CC module/accel/error/accel_error.o 00:03:28.226 CC module/accel/error/accel_error_rpc.o 00:03:28.226 CC module/scheduler/gscheduler/gscheduler.o 00:03:28.226 CC module/keyring/file/keyring_rpc.o 00:03:28.226 CC module/keyring/file/keyring.o 00:03:28.226 SO libspdk_env_dpdk_rpc.so.6.0 00:03:28.226 SYMLINK libspdk_env_dpdk_rpc.so 00:03:28.485 LIB libspdk_keyring_linux.a 00:03:28.485 LIB libspdk_scheduler_dpdk_governor.a 00:03:28.485 LIB libspdk_scheduler_gscheduler.a 00:03:28.485 SO libspdk_keyring_linux.so.1.0 00:03:28.485 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:28.485 LIB libspdk_keyring_file.a 00:03:28.485 LIB libspdk_accel_ioat.a 00:03:28.485 LIB libspdk_accel_error.a 00:03:28.485 SO libspdk_scheduler_gscheduler.so.4.0 00:03:28.485 LIB libspdk_accel_iaa.a 00:03:28.485 LIB libspdk_scheduler_dynamic.a 00:03:28.485 SO libspdk_keyring_file.so.2.0 00:03:28.485 SO libspdk_accel_ioat.so.6.0 00:03:28.485 SO libspdk_accel_error.so.2.0 00:03:28.485 SO libspdk_accel_iaa.so.3.0 00:03:28.485 SYMLINK 
libspdk_scheduler_dpdk_governor.so 00:03:28.485 SYMLINK libspdk_keyring_linux.so 00:03:28.485 SO libspdk_scheduler_dynamic.so.4.0 00:03:28.485 SYMLINK libspdk_scheduler_gscheduler.so 00:03:28.485 SYMLINK libspdk_keyring_file.so 00:03:28.485 LIB libspdk_accel_dsa.a 00:03:28.485 LIB libspdk_blob_bdev.a 00:03:28.485 SYMLINK libspdk_accel_ioat.so 00:03:28.485 SYMLINK libspdk_accel_error.so 00:03:28.485 SO libspdk_blob_bdev.so.11.0 00:03:28.485 SYMLINK libspdk_scheduler_dynamic.so 00:03:28.485 SO libspdk_accel_dsa.so.5.0 00:03:28.485 SYMLINK libspdk_accel_iaa.so 00:03:28.744 SYMLINK libspdk_blob_bdev.so 00:03:28.744 SYMLINK libspdk_accel_dsa.so 00:03:29.003 LIB libspdk_fsdev_aio.a 00:03:29.003 SO libspdk_fsdev_aio.so.1.0 00:03:29.003 LIB libspdk_sock_posix.a 00:03:29.003 SO libspdk_sock_posix.so.6.0 00:03:29.003 SYMLINK libspdk_fsdev_aio.so 00:03:29.003 SYMLINK libspdk_sock_posix.so 00:03:29.262 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:29.262 CC module/bdev/nvme/bdev_nvme.o 00:03:29.262 CC module/bdev/nvme/nvme_rpc.o 00:03:29.262 CC module/bdev/nvme/vbdev_opal.o 00:03:29.262 CC module/bdev/nvme/bdev_mdns_client.o 00:03:29.262 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:29.262 CC module/bdev/gpt/gpt.o 00:03:29.262 CC module/bdev/gpt/vbdev_gpt.o 00:03:29.262 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:29.262 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:29.262 CC module/blobfs/bdev/blobfs_bdev.o 00:03:29.262 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:29.262 CC module/bdev/lvol/vbdev_lvol.o 00:03:29.262 CC module/bdev/null/bdev_null.o 00:03:29.262 CC module/bdev/null/bdev_null_rpc.o 00:03:29.262 CC module/bdev/iscsi/bdev_iscsi.o 00:03:29.262 CC module/bdev/passthru/vbdev_passthru.o 00:03:29.262 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:29.262 CC module/bdev/aio/bdev_aio.o 00:03:29.262 CC module/bdev/ftl/bdev_ftl.o 00:03:29.262 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:29.262 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:29.262 CC module/bdev/delay/vbdev_delay.o 00:03:29.262 CC module/bdev/error/vbdev_error_rpc.o 00:03:29.262 CC module/bdev/error/vbdev_error.o 00:03:29.262 CC module/bdev/aio/bdev_aio_rpc.o 00:03:29.262 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:29.262 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:29.262 CC module/bdev/malloc/bdev_malloc.o 00:03:29.262 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:29.262 CC module/bdev/split/vbdev_split.o 00:03:29.262 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:29.262 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:29.262 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:29.262 CC module/bdev/split/vbdev_split_rpc.o 00:03:29.262 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:29.262 CC module/bdev/raid/bdev_raid.o 00:03:29.262 CC module/bdev/raid/bdev_raid_rpc.o 00:03:29.262 CC module/bdev/raid/bdev_raid_sb.o 00:03:29.262 CC module/bdev/raid/raid0.o 00:03:29.262 CC module/bdev/raid/raid1.o 00:03:29.262 CC module/bdev/raid/concat.o 00:03:29.520 LIB libspdk_bdev_split.a 00:03:29.520 LIB libspdk_blobfs_bdev.a 00:03:29.520 SO libspdk_blobfs_bdev.so.6.0 00:03:29.520 SO libspdk_bdev_split.so.6.0 00:03:29.520 LIB libspdk_bdev_null.a 00:03:29.520 LIB libspdk_bdev_ftl.a 00:03:29.520 SO libspdk_bdev_null.so.6.0 00:03:29.520 LIB libspdk_bdev_passthru.a 00:03:29.520 SYMLINK libspdk_blobfs_bdev.so 00:03:29.520 SYMLINK libspdk_bdev_split.so 00:03:29.520 SO libspdk_bdev_ftl.so.6.0 00:03:29.520 SO libspdk_bdev_passthru.so.6.0 00:03:29.520 LIB libspdk_bdev_aio.a 00:03:29.520 LIB libspdk_bdev_error.a 00:03:29.520 SYMLINK libspdk_bdev_null.so 
00:03:29.520 LIB libspdk_bdev_iscsi.a 00:03:29.520 LIB libspdk_bdev_delay.a 00:03:29.520 LIB libspdk_bdev_malloc.a 00:03:29.520 SO libspdk_bdev_aio.so.6.0 00:03:29.520 LIB libspdk_bdev_gpt.a 00:03:29.520 SYMLINK libspdk_bdev_ftl.so 00:03:29.520 SO libspdk_bdev_error.so.6.0 00:03:29.520 SO libspdk_bdev_iscsi.so.6.0 00:03:29.520 SO libspdk_bdev_malloc.so.6.0 00:03:29.520 SYMLINK libspdk_bdev_passthru.so 00:03:29.520 SO libspdk_bdev_delay.so.6.0 00:03:29.778 SO libspdk_bdev_gpt.so.6.0 00:03:29.778 LIB libspdk_bdev_zone_block.a 00:03:29.778 SYMLINK libspdk_bdev_aio.so 00:03:29.778 SO libspdk_bdev_zone_block.so.6.0 00:03:29.778 SYMLINK libspdk_bdev_error.so 00:03:29.778 SYMLINK libspdk_bdev_malloc.so 00:03:29.778 SYMLINK libspdk_bdev_iscsi.so 00:03:29.778 SYMLINK libspdk_bdev_delay.so 00:03:29.778 SYMLINK libspdk_bdev_gpt.so 00:03:29.778 SYMLINK libspdk_bdev_zone_block.so 00:03:29.778 LIB libspdk_bdev_lvol.a 00:03:29.778 LIB libspdk_bdev_virtio.a 00:03:29.778 SO libspdk_bdev_lvol.so.6.0 00:03:29.778 SO libspdk_bdev_virtio.so.6.0 00:03:30.036 SYMLINK libspdk_bdev_lvol.so 00:03:30.036 SYMLINK libspdk_bdev_virtio.so 00:03:30.295 LIB libspdk_bdev_raid.a 00:03:30.295 SO libspdk_bdev_raid.so.6.0 00:03:30.295 SYMLINK libspdk_bdev_raid.so 00:03:31.672 LIB libspdk_bdev_nvme.a 00:03:31.672 SO libspdk_bdev_nvme.so.7.1 00:03:31.672 SYMLINK libspdk_bdev_nvme.so 00:03:32.611 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:32.611 CC module/event/subsystems/vmd/vmd.o 00:03:32.611 CC module/event/subsystems/fsdev/fsdev.o 00:03:32.611 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:32.611 CC module/event/subsystems/iobuf/iobuf.o 00:03:32.611 CC module/event/subsystems/sock/sock.o 00:03:32.611 CC module/event/subsystems/keyring/keyring.o 00:03:32.611 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:32.611 CC module/event/subsystems/scheduler/scheduler.o 00:03:32.611 LIB libspdk_event_fsdev.a 00:03:32.611 LIB libspdk_event_vmd.a 00:03:32.611 LIB libspdk_event_scheduler.a 00:03:32.611 LIB libspdk_event_keyring.a 00:03:32.611 LIB libspdk_event_vhost_blk.a 00:03:32.611 LIB libspdk_event_sock.a 00:03:32.611 SO libspdk_event_fsdev.so.1.0 00:03:32.611 LIB libspdk_event_iobuf.a 00:03:32.611 SO libspdk_event_keyring.so.1.0 00:03:32.611 SO libspdk_event_scheduler.so.4.0 00:03:32.611 SO libspdk_event_vmd.so.6.0 00:03:32.611 SO libspdk_event_vhost_blk.so.3.0 00:03:32.611 SO libspdk_event_sock.so.5.0 00:03:32.611 SO libspdk_event_iobuf.so.3.0 00:03:32.611 SYMLINK libspdk_event_fsdev.so 00:03:32.611 SYMLINK libspdk_event_keyring.so 00:03:32.870 SYMLINK libspdk_event_scheduler.so 00:03:32.870 SYMLINK libspdk_event_vhost_blk.so 00:03:32.870 SYMLINK libspdk_event_vmd.so 00:03:32.870 SYMLINK libspdk_event_sock.so 00:03:32.870 SYMLINK libspdk_event_iobuf.so 00:03:33.128 CC module/event/subsystems/accel/accel.o 00:03:33.387 LIB libspdk_event_accel.a 00:03:33.387 SO libspdk_event_accel.so.6.0 00:03:33.387 SYMLINK libspdk_event_accel.so 00:03:33.954 CC module/event/subsystems/bdev/bdev.o 00:03:33.954 LIB libspdk_event_bdev.a 00:03:33.954 SO libspdk_event_bdev.so.6.0 00:03:33.954 SYMLINK libspdk_event_bdev.so 00:03:34.522 CC module/event/subsystems/nbd/nbd.o 00:03:34.522 CC module/event/subsystems/ublk/ublk.o 00:03:34.522 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:34.522 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:34.522 CC module/event/subsystems/scsi/scsi.o 00:03:34.522 LIB libspdk_event_ublk.a 00:03:34.522 LIB libspdk_event_nbd.a 00:03:34.522 LIB libspdk_event_scsi.a 00:03:34.522 SO libspdk_event_nbd.so.6.0 
00:03:34.522 SO libspdk_event_ublk.so.3.0 00:03:34.522 SO libspdk_event_scsi.so.6.0 00:03:34.522 LIB libspdk_event_nvmf.a 00:03:34.522 SYMLINK libspdk_event_nbd.so 00:03:34.781 SYMLINK libspdk_event_ublk.so 00:03:34.781 SO libspdk_event_nvmf.so.6.0 00:03:34.781 SYMLINK libspdk_event_scsi.so 00:03:34.781 SYMLINK libspdk_event_nvmf.so 00:03:35.040 CC module/event/subsystems/iscsi/iscsi.o 00:03:35.041 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:35.299 LIB libspdk_event_vhost_scsi.a 00:03:35.299 LIB libspdk_event_iscsi.a 00:03:35.299 SO libspdk_event_vhost_scsi.so.3.0 00:03:35.300 SO libspdk_event_iscsi.so.6.0 00:03:35.300 SYMLINK libspdk_event_vhost_scsi.so 00:03:35.300 SYMLINK libspdk_event_iscsi.so 00:03:35.558 SO libspdk.so.6.0 00:03:35.558 SYMLINK libspdk.so 00:03:35.816 CC app/spdk_top/spdk_top.o 00:03:35.816 TEST_HEADER include/spdk/accel.h 00:03:35.816 TEST_HEADER include/spdk/accel_module.h 00:03:35.816 TEST_HEADER include/spdk/assert.h 00:03:35.816 TEST_HEADER include/spdk/base64.h 00:03:35.816 TEST_HEADER include/spdk/barrier.h 00:03:35.816 TEST_HEADER include/spdk/bdev.h 00:03:35.816 TEST_HEADER include/spdk/bdev_module.h 00:03:35.816 CC app/trace_record/trace_record.o 00:03:35.816 CC app/spdk_lspci/spdk_lspci.o 00:03:35.816 CXX app/trace/trace.o 00:03:35.816 TEST_HEADER include/spdk/bdev_zone.h 00:03:35.816 TEST_HEADER include/spdk/blob_bdev.h 00:03:35.816 TEST_HEADER include/spdk/bit_pool.h 00:03:35.816 TEST_HEADER include/spdk/bit_array.h 00:03:35.816 TEST_HEADER include/spdk/blobfs.h 00:03:35.816 TEST_HEADER include/spdk/conf.h 00:03:35.816 TEST_HEADER include/spdk/blob.h 00:03:35.816 CC app/spdk_nvme_discover/discovery_aer.o 00:03:35.816 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:35.816 CC app/spdk_nvme_identify/identify.o 00:03:35.816 TEST_HEADER include/spdk/cpuset.h 00:03:35.816 TEST_HEADER include/spdk/config.h 00:03:35.816 TEST_HEADER include/spdk/crc16.h 00:03:35.816 TEST_HEADER include/spdk/crc64.h 00:03:35.816 TEST_HEADER include/spdk/crc32.h 00:03:35.816 TEST_HEADER include/spdk/dif.h 00:03:35.816 TEST_HEADER include/spdk/endian.h 00:03:35.816 CC app/spdk_nvme_perf/perf.o 00:03:35.816 TEST_HEADER include/spdk/dma.h 00:03:35.816 TEST_HEADER include/spdk/env_dpdk.h 00:03:35.816 TEST_HEADER include/spdk/event.h 00:03:35.816 TEST_HEADER include/spdk/fd_group.h 00:03:35.816 TEST_HEADER include/spdk/env.h 00:03:35.816 TEST_HEADER include/spdk/file.h 00:03:35.816 TEST_HEADER include/spdk/fd.h 00:03:35.816 TEST_HEADER include/spdk/ftl.h 00:03:35.816 TEST_HEADER include/spdk/fsdev.h 00:03:35.816 TEST_HEADER include/spdk/fsdev_module.h 00:03:35.816 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:35.816 TEST_HEADER include/spdk/gpt_spec.h 00:03:35.816 CC test/rpc_client/rpc_client_test.o 00:03:35.816 TEST_HEADER include/spdk/hexlify.h 00:03:35.816 TEST_HEADER include/spdk/histogram_data.h 00:03:35.816 TEST_HEADER include/spdk/idxd.h 00:03:35.816 TEST_HEADER include/spdk/idxd_spec.h 00:03:35.816 TEST_HEADER include/spdk/init.h 00:03:35.816 TEST_HEADER include/spdk/ioat.h 00:03:35.816 TEST_HEADER include/spdk/ioat_spec.h 00:03:35.816 TEST_HEADER include/spdk/json.h 00:03:35.816 TEST_HEADER include/spdk/iscsi_spec.h 00:03:35.816 TEST_HEADER include/spdk/jsonrpc.h 00:03:35.816 CC app/spdk_dd/spdk_dd.o 00:03:35.816 TEST_HEADER include/spdk/keyring.h 00:03:35.816 TEST_HEADER include/spdk/log.h 00:03:35.816 TEST_HEADER include/spdk/likely.h 00:03:35.816 TEST_HEADER include/spdk/keyring_module.h 00:03:35.816 TEST_HEADER include/spdk/lvol.h 00:03:35.816 TEST_HEADER 
include/spdk/memory.h 00:03:35.816 TEST_HEADER include/spdk/md5.h 00:03:35.816 TEST_HEADER include/spdk/mmio.h 00:03:35.816 TEST_HEADER include/spdk/nbd.h 00:03:35.816 TEST_HEADER include/spdk/net.h 00:03:35.816 TEST_HEADER include/spdk/notify.h 00:03:35.816 TEST_HEADER include/spdk/nvme.h 00:03:35.816 TEST_HEADER include/spdk/nvme_intel.h 00:03:35.816 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:35.816 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:35.816 TEST_HEADER include/spdk/nvme_spec.h 00:03:35.816 TEST_HEADER include/spdk/nvme_zns.h 00:03:35.816 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:35.816 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:35.816 TEST_HEADER include/spdk/nvmf.h 00:03:35.816 TEST_HEADER include/spdk/nvmf_spec.h 00:03:35.816 CC app/nvmf_tgt/nvmf_main.o 00:03:35.816 TEST_HEADER include/spdk/nvmf_transport.h 00:03:35.816 TEST_HEADER include/spdk/opal.h 00:03:35.816 TEST_HEADER include/spdk/opal_spec.h 00:03:35.816 TEST_HEADER include/spdk/pipe.h 00:03:35.816 TEST_HEADER include/spdk/queue.h 00:03:35.816 TEST_HEADER include/spdk/reduce.h 00:03:35.816 TEST_HEADER include/spdk/pci_ids.h 00:03:35.816 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:35.816 TEST_HEADER include/spdk/scheduler.h 00:03:35.816 TEST_HEADER include/spdk/rpc.h 00:03:35.816 TEST_HEADER include/spdk/scsi_spec.h 00:03:35.816 TEST_HEADER include/spdk/scsi.h 00:03:35.816 TEST_HEADER include/spdk/sock.h 00:03:35.816 TEST_HEADER include/spdk/string.h 00:03:35.816 TEST_HEADER include/spdk/thread.h 00:03:35.816 TEST_HEADER include/spdk/stdinc.h 00:03:35.816 TEST_HEADER include/spdk/trace.h 00:03:35.816 TEST_HEADER include/spdk/trace_parser.h 00:03:35.816 TEST_HEADER include/spdk/tree.h 00:03:35.816 TEST_HEADER include/spdk/ublk.h 00:03:35.816 TEST_HEADER include/spdk/util.h 00:03:35.816 TEST_HEADER include/spdk/uuid.h 00:03:35.816 TEST_HEADER include/spdk/version.h 00:03:35.816 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:35.816 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:35.816 TEST_HEADER include/spdk/vhost.h 00:03:35.816 TEST_HEADER include/spdk/vmd.h 00:03:35.816 TEST_HEADER include/spdk/xor.h 00:03:35.816 TEST_HEADER include/spdk/zipf.h 00:03:35.816 CXX test/cpp_headers/accel.o 00:03:36.081 CXX test/cpp_headers/accel_module.o 00:03:36.081 CXX test/cpp_headers/assert.o 00:03:36.081 CXX test/cpp_headers/barrier.o 00:03:36.081 CXX test/cpp_headers/bdev.o 00:03:36.081 CXX test/cpp_headers/base64.o 00:03:36.081 CXX test/cpp_headers/bdev_module.o 00:03:36.081 CXX test/cpp_headers/bdev_zone.o 00:03:36.081 CXX test/cpp_headers/bit_array.o 00:03:36.081 CXX test/cpp_headers/bit_pool.o 00:03:36.081 CXX test/cpp_headers/blob_bdev.o 00:03:36.081 CC app/iscsi_tgt/iscsi_tgt.o 00:03:36.081 CXX test/cpp_headers/blobfs_bdev.o 00:03:36.081 CXX test/cpp_headers/blobfs.o 00:03:36.081 CXX test/cpp_headers/blob.o 00:03:36.081 CXX test/cpp_headers/conf.o 00:03:36.081 CXX test/cpp_headers/config.o 00:03:36.081 CXX test/cpp_headers/crc16.o 00:03:36.081 CXX test/cpp_headers/cpuset.o 00:03:36.081 CXX test/cpp_headers/crc32.o 00:03:36.081 CXX test/cpp_headers/crc64.o 00:03:36.081 CXX test/cpp_headers/dma.o 00:03:36.081 CXX test/cpp_headers/dif.o 00:03:36.081 CXX test/cpp_headers/endian.o 00:03:36.081 CXX test/cpp_headers/env_dpdk.o 00:03:36.081 CXX test/cpp_headers/env.o 00:03:36.081 CXX test/cpp_headers/event.o 00:03:36.081 CXX test/cpp_headers/fd_group.o 00:03:36.081 CXX test/cpp_headers/fd.o 00:03:36.081 CXX test/cpp_headers/file.o 00:03:36.081 CXX test/cpp_headers/fsdev.o 00:03:36.081 CXX test/cpp_headers/fsdev_module.o 
00:03:36.081 CXX test/cpp_headers/ftl.o 00:03:36.081 CXX test/cpp_headers/fuse_dispatcher.o 00:03:36.081 CXX test/cpp_headers/gpt_spec.o 00:03:36.081 CXX test/cpp_headers/hexlify.o 00:03:36.081 CXX test/cpp_headers/histogram_data.o 00:03:36.081 CXX test/cpp_headers/idxd.o 00:03:36.081 CXX test/cpp_headers/idxd_spec.o 00:03:36.081 CXX test/cpp_headers/init.o 00:03:36.081 CXX test/cpp_headers/ioat.o 00:03:36.081 CXX test/cpp_headers/ioat_spec.o 00:03:36.081 CXX test/cpp_headers/iscsi_spec.o 00:03:36.081 CC app/spdk_tgt/spdk_tgt.o 00:03:36.081 CXX test/cpp_headers/json.o 00:03:36.081 CC examples/ioat/perf/perf.o 00:03:36.081 CC examples/ioat/verify/verify.o 00:03:36.081 CC examples/util/zipf/zipf.o 00:03:36.081 CC app/fio/nvme/fio_plugin.o 00:03:36.081 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:36.081 CC test/env/vtophys/vtophys.o 00:03:36.081 CC test/env/pci/pci_ut.o 00:03:36.081 CC test/app/stub/stub.o 00:03:36.081 CC test/app/jsoncat/jsoncat.o 00:03:36.081 CC test/app/histogram_perf/histogram_perf.o 00:03:36.081 CC test/thread/poller_perf/poller_perf.o 00:03:36.081 CC test/env/memory/memory_ut.o 00:03:36.081 CC app/fio/bdev/fio_plugin.o 00:03:36.081 CC test/app/bdev_svc/bdev_svc.o 00:03:36.081 CC test/dma/test_dma/test_dma.o 00:03:36.341 LINK spdk_lspci 00:03:36.341 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:36.341 LINK nvmf_tgt 00:03:36.341 LINK spdk_nvme_discover 00:03:36.341 CC test/env/mem_callbacks/mem_callbacks.o 00:03:36.341 LINK rpc_client_test 00:03:36.341 LINK jsoncat 00:03:36.341 LINK interrupt_tgt 00:03:36.341 LINK spdk_trace_record 00:03:36.603 CXX test/cpp_headers/jsonrpc.o 00:03:36.603 CXX test/cpp_headers/keyring.o 00:03:36.603 CXX test/cpp_headers/keyring_module.o 00:03:36.603 CXX test/cpp_headers/likely.o 00:03:36.603 CXX test/cpp_headers/log.o 00:03:36.603 LINK spdk_tgt 00:03:36.603 CXX test/cpp_headers/lvol.o 00:03:36.603 LINK zipf 00:03:36.603 CXX test/cpp_headers/md5.o 00:03:36.603 CXX test/cpp_headers/memory.o 00:03:36.603 CXX test/cpp_headers/mmio.o 00:03:36.603 CXX test/cpp_headers/nbd.o 00:03:36.603 CXX test/cpp_headers/net.o 00:03:36.603 CXX test/cpp_headers/notify.o 00:03:36.603 CXX test/cpp_headers/nvme.o 00:03:36.603 CXX test/cpp_headers/nvme_intel.o 00:03:36.603 CXX test/cpp_headers/nvme_ocssd.o 00:03:36.603 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:36.603 CXX test/cpp_headers/nvme_spec.o 00:03:36.603 CXX test/cpp_headers/nvme_zns.o 00:03:36.603 CXX test/cpp_headers/nvmf_cmd.o 00:03:36.603 LINK vtophys 00:03:36.603 LINK iscsi_tgt 00:03:36.603 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:36.603 LINK poller_perf 00:03:36.603 CXX test/cpp_headers/nvmf.o 00:03:36.603 LINK histogram_perf 00:03:36.603 CXX test/cpp_headers/nvmf_spec.o 00:03:36.603 LINK stub 00:03:36.603 LINK env_dpdk_post_init 00:03:36.603 CXX test/cpp_headers/nvmf_transport.o 00:03:36.603 CXX test/cpp_headers/opal.o 00:03:36.603 CXX test/cpp_headers/opal_spec.o 00:03:36.603 CXX test/cpp_headers/pci_ids.o 00:03:36.603 CXX test/cpp_headers/pipe.o 00:03:36.603 CXX test/cpp_headers/queue.o 00:03:36.603 CXX test/cpp_headers/reduce.o 00:03:36.603 CXX test/cpp_headers/rpc.o 00:03:36.603 CXX test/cpp_headers/scheduler.o 00:03:36.603 CXX test/cpp_headers/scsi.o 00:03:36.603 CXX test/cpp_headers/scsi_spec.o 00:03:36.603 CXX test/cpp_headers/sock.o 00:03:36.603 CXX test/cpp_headers/stdinc.o 00:03:36.603 CXX test/cpp_headers/string.o 00:03:36.603 CXX test/cpp_headers/thread.o 00:03:36.603 LINK bdev_svc 00:03:36.603 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:36.603 CXX 
test/cpp_headers/trace.o 00:03:36.603 LINK verify 00:03:36.603 CXX test/cpp_headers/trace_parser.o 00:03:36.603 CXX test/cpp_headers/tree.o 00:03:36.603 LINK ioat_perf 00:03:36.603 CXX test/cpp_headers/ublk.o 00:03:36.603 CXX test/cpp_headers/util.o 00:03:36.603 CXX test/cpp_headers/uuid.o 00:03:36.603 CXX test/cpp_headers/version.o 00:03:36.603 CXX test/cpp_headers/vfio_user_pci.o 00:03:36.603 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:36.603 CXX test/cpp_headers/vfio_user_spec.o 00:03:36.861 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:36.861 CXX test/cpp_headers/vhost.o 00:03:36.861 CXX test/cpp_headers/vmd.o 00:03:36.861 CXX test/cpp_headers/xor.o 00:03:36.861 CXX test/cpp_headers/zipf.o 00:03:36.861 LINK spdk_dd 00:03:36.861 LINK spdk_trace 00:03:36.861 LINK pci_ut 00:03:37.120 CC examples/sock/hello_world/hello_sock.o 00:03:37.120 CC examples/vmd/lsvmd/lsvmd.o 00:03:37.120 CC examples/vmd/led/led.o 00:03:37.120 CC examples/idxd/perf/perf.o 00:03:37.120 CC test/event/reactor_perf/reactor_perf.o 00:03:37.120 CC test/event/event_perf/event_perf.o 00:03:37.120 CC test/event/reactor/reactor.o 00:03:37.120 CC test/event/app_repeat/app_repeat.o 00:03:37.120 CC examples/thread/thread/thread_ex.o 00:03:37.120 CC test/event/scheduler/scheduler.o 00:03:37.120 LINK nvme_fuzz 00:03:37.120 LINK test_dma 00:03:37.120 LINK spdk_nvme 00:03:37.120 LINK spdk_bdev 00:03:37.120 LINK mem_callbacks 00:03:37.378 LINK led 00:03:37.378 LINK lsvmd 00:03:37.378 LINK reactor_perf 00:03:37.378 LINK reactor 00:03:37.378 LINK event_perf 00:03:37.378 LINK app_repeat 00:03:37.378 LINK hello_sock 00:03:37.378 LINK spdk_nvme_perf 00:03:37.378 LINK vhost_fuzz 00:03:37.378 CC app/vhost/vhost.o 00:03:37.378 LINK spdk_nvme_identify 00:03:37.378 LINK scheduler 00:03:37.378 LINK thread 00:03:37.636 LINK spdk_top 00:03:37.636 LINK idxd_perf 00:03:37.636 LINK vhost 00:03:37.636 CC test/nvme/reset/reset.o 00:03:37.636 CC test/nvme/err_injection/err_injection.o 00:03:37.636 CC test/nvme/simple_copy/simple_copy.o 00:03:37.636 CC test/nvme/fused_ordering/fused_ordering.o 00:03:37.636 CC test/nvme/sgl/sgl.o 00:03:37.636 CC test/nvme/boot_partition/boot_partition.o 00:03:37.636 CC test/nvme/startup/startup.o 00:03:37.636 CC test/nvme/aer/aer.o 00:03:37.636 CC test/nvme/e2edp/nvme_dp.o 00:03:37.636 CC test/nvme/connect_stress/connect_stress.o 00:03:37.636 CC test/nvme/fdp/fdp.o 00:03:37.636 CC test/nvme/overhead/overhead.o 00:03:37.636 CC test/nvme/cuse/cuse.o 00:03:37.636 CC test/nvme/compliance/nvme_compliance.o 00:03:37.636 CC test/nvme/reserve/reserve.o 00:03:37.636 CC test/accel/dif/dif.o 00:03:37.636 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:37.636 CC test/blobfs/mkfs/mkfs.o 00:03:37.894 CC examples/nvme/hello_world/hello_world.o 00:03:37.895 CC examples/nvme/hotplug/hotplug.o 00:03:37.895 CC examples/nvme/arbitration/arbitration.o 00:03:37.895 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:37.895 CC examples/nvme/reconnect/reconnect.o 00:03:37.895 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:37.895 CC examples/nvme/abort/abort.o 00:03:37.895 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:37.895 LINK memory_ut 00:03:37.895 CC test/lvol/esnap/esnap.o 00:03:37.895 CC examples/accel/perf/accel_perf.o 00:03:37.895 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:37.895 CC examples/blob/cli/blobcli.o 00:03:37.895 LINK boot_partition 00:03:37.895 LINK startup 00:03:37.895 CC examples/blob/hello_world/hello_blob.o 00:03:37.895 LINK connect_stress 00:03:37.895 LINK doorbell_aers 00:03:37.895 LINK 
err_injection 00:03:37.895 LINK fused_ordering 00:03:37.895 LINK simple_copy 00:03:37.895 LINK mkfs 00:03:37.895 LINK pmr_persistence 00:03:37.895 LINK cmb_copy 00:03:37.895 LINK reserve 00:03:38.154 LINK reset 00:03:38.154 LINK sgl 00:03:38.154 LINK nvme_dp 00:03:38.154 LINK overhead 00:03:38.154 LINK hello_world 00:03:38.154 LINK hotplug 00:03:38.154 LINK aer 00:03:38.154 LINK fdp 00:03:38.154 LINK nvme_compliance 00:03:38.154 LINK arbitration 00:03:38.154 LINK reconnect 00:03:38.154 LINK hello_fsdev 00:03:38.154 LINK hello_blob 00:03:38.154 LINK abort 00:03:38.412 LINK nvme_manage 00:03:38.412 LINK blobcli 00:03:38.412 LINK accel_perf 00:03:38.412 LINK dif 00:03:38.670 LINK iscsi_fuzz 00:03:38.927 LINK cuse 00:03:38.928 CC examples/bdev/hello_world/hello_bdev.o 00:03:38.928 CC examples/bdev/bdevperf/bdevperf.o 00:03:39.238 CC test/bdev/bdevio/bdevio.o 00:03:39.238 LINK hello_bdev 00:03:39.543 LINK bdevio 00:03:39.830 LINK bdevperf 00:03:40.398 CC examples/nvmf/nvmf/nvmf.o 00:03:40.657 LINK nvmf 00:03:43.190 LINK esnap 00:03:43.190 00:03:43.190 real 1m3.291s 00:03:43.190 user 8m54.628s 00:03:43.190 sys 3m26.709s 00:03:43.190 15:08:10 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:43.190 15:08:10 make -- common/autotest_common.sh@10 -- $ set +x 00:03:43.190 ************************************ 00:03:43.190 END TEST make 00:03:43.190 ************************************ 00:03:43.190 15:08:10 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:43.190 15:08:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:43.190 15:08:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:43.190 15:08:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.190 15:08:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:43.190 15:08:10 -- pm/common@44 -- $ pid=2859180 00:03:43.190 15:08:10 -- pm/common@50 -- $ kill -TERM 2859180 00:03:43.190 15:08:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.190 15:08:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:43.190 15:08:10 -- pm/common@44 -- $ pid=2859182 00:03:43.190 15:08:10 -- pm/common@50 -- $ kill -TERM 2859182 00:03:43.190 15:08:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.190 15:08:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:43.190 15:08:10 -- pm/common@44 -- $ pid=2859183 00:03:43.190 15:08:10 -- pm/common@50 -- $ kill -TERM 2859183 00:03:43.190 15:08:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.190 15:08:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:43.190 15:08:10 -- pm/common@44 -- $ pid=2859206 00:03:43.190 15:08:10 -- pm/common@50 -- $ sudo -E kill -TERM 2859206 00:03:43.190 15:08:10 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:43.191 15:08:10 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:03:43.451 15:08:10 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:43.451 15:08:10 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:43.451 15:08:10 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:43.451 15:08:10 -- common/autotest_common.sh@1691 
-- # lt 1.15 2 00:03:43.451 15:08:10 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:43.451 15:08:10 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:43.451 15:08:10 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:43.451 15:08:10 -- scripts/common.sh@336 -- # IFS=.-: 00:03:43.451 15:08:10 -- scripts/common.sh@336 -- # read -ra ver1 00:03:43.451 15:08:10 -- scripts/common.sh@337 -- # IFS=.-: 00:03:43.451 15:08:10 -- scripts/common.sh@337 -- # read -ra ver2 00:03:43.451 15:08:10 -- scripts/common.sh@338 -- # local 'op=<' 00:03:43.451 15:08:10 -- scripts/common.sh@340 -- # ver1_l=2 00:03:43.451 15:08:10 -- scripts/common.sh@341 -- # ver2_l=1 00:03:43.451 15:08:10 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:43.451 15:08:10 -- scripts/common.sh@344 -- # case "$op" in 00:03:43.451 15:08:10 -- scripts/common.sh@345 -- # : 1 00:03:43.451 15:08:10 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:43.451 15:08:10 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:43.451 15:08:10 -- scripts/common.sh@365 -- # decimal 1 00:03:43.451 15:08:10 -- scripts/common.sh@353 -- # local d=1 00:03:43.451 15:08:10 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:43.451 15:08:10 -- scripts/common.sh@355 -- # echo 1 00:03:43.451 15:08:10 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:43.451 15:08:10 -- scripts/common.sh@366 -- # decimal 2 00:03:43.451 15:08:10 -- scripts/common.sh@353 -- # local d=2 00:03:43.451 15:08:10 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:43.451 15:08:10 -- scripts/common.sh@355 -- # echo 2 00:03:43.451 15:08:10 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:43.451 15:08:10 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:43.451 15:08:10 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:43.451 15:08:10 -- scripts/common.sh@368 -- # return 0 00:03:43.451 15:08:10 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:43.451 15:08:10 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:43.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.451 --rc genhtml_branch_coverage=1 00:03:43.451 --rc genhtml_function_coverage=1 00:03:43.451 --rc genhtml_legend=1 00:03:43.451 --rc geninfo_all_blocks=1 00:03:43.451 --rc geninfo_unexecuted_blocks=1 00:03:43.451 00:03:43.451 ' 00:03:43.451 15:08:10 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:43.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.451 --rc genhtml_branch_coverage=1 00:03:43.451 --rc genhtml_function_coverage=1 00:03:43.451 --rc genhtml_legend=1 00:03:43.451 --rc geninfo_all_blocks=1 00:03:43.451 --rc geninfo_unexecuted_blocks=1 00:03:43.451 00:03:43.451 ' 00:03:43.451 15:08:10 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:43.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.451 --rc genhtml_branch_coverage=1 00:03:43.451 --rc genhtml_function_coverage=1 00:03:43.451 --rc genhtml_legend=1 00:03:43.451 --rc geninfo_all_blocks=1 00:03:43.451 --rc geninfo_unexecuted_blocks=1 00:03:43.451 00:03:43.451 ' 00:03:43.451 15:08:10 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:43.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.451 --rc genhtml_branch_coverage=1 00:03:43.451 --rc genhtml_function_coverage=1 00:03:43.451 --rc genhtml_legend=1 00:03:43.451 --rc geninfo_all_blocks=1 00:03:43.451 --rc geninfo_unexecuted_blocks=1 00:03:43.451 00:03:43.451 ' 
00:03:43.451 15:08:10 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:03:43.451 15:08:10 -- nvmf/common.sh@7 -- # uname -s 00:03:43.451 15:08:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:43.451 15:08:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:43.451 15:08:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:43.451 15:08:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:43.451 15:08:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:43.451 15:08:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:43.451 15:08:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:43.451 15:08:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:43.451 15:08:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:43.451 15:08:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:43.451 15:08:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:03:43.451 15:08:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:03:43.451 15:08:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:43.451 15:08:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:43.451 15:08:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:43.451 15:08:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:43.451 15:08:10 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:03:43.451 15:08:10 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:43.451 15:08:10 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:43.451 15:08:10 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:43.451 15:08:10 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:43.451 15:08:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.451 15:08:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.451 15:08:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.451 15:08:10 -- paths/export.sh@5 -- # export PATH 00:03:43.451 15:08:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.451 15:08:10 -- nvmf/common.sh@51 -- # : 0 00:03:43.451 15:08:10 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:43.451 15:08:10 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:43.451 15:08:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:43.451 15:08:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:43.451 15:08:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:43.451 15:08:10 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:43.451 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:43.451 15:08:10 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:43.451 15:08:10 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:43.451 15:08:10 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:43.451 15:08:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:43.451 15:08:10 -- spdk/autotest.sh@32 -- # uname -s 00:03:43.452 15:08:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:43.452 15:08:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:43.452 15:08:10 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:43.452 15:08:10 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:43.452 15:08:10 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:43.452 15:08:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:43.452 15:08:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:43.452 15:08:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:43.452 15:08:10 -- spdk/autotest.sh@48 -- # udevadm_pid=2919748 00:03:43.452 15:08:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:43.452 15:08:10 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:43.452 15:08:10 -- pm/common@17 -- # local monitor 00:03:43.452 15:08:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.452 15:08:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.452 15:08:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.452 15:08:10 -- pm/common@21 -- # date +%s 00:03:43.452 15:08:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.452 15:08:10 -- pm/common@21 -- # date +%s 00:03:43.452 15:08:10 -- pm/common@25 -- # sleep 1 00:03:43.452 15:08:10 -- pm/common@21 -- # date +%s 00:03:43.452 15:08:11 -- pm/common@21 -- # date +%s 00:03:43.452 15:08:11 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730902091 00:03:43.452 15:08:11 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730902091 00:03:43.452 15:08:11 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730902091 00:03:43.452 15:08:11 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730902091 00:03:43.452 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730902091_collect-cpu-load.pm.log 00:03:43.452 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730902091_collect-vmstat.pm.log 00:03:43.452 Redirecting to 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730902091_collect-cpu-temp.pm.log 00:03:43.452 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730902091_collect-bmc-pm.bmc.pm.log 00:03:44.389 15:08:12 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:44.389 15:08:12 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:44.389 15:08:12 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:44.389 15:08:12 -- common/autotest_common.sh@10 -- # set +x 00:03:44.389 15:08:12 -- spdk/autotest.sh@59 -- # create_test_list 00:03:44.389 15:08:12 -- common/autotest_common.sh@750 -- # xtrace_disable 00:03:44.389 15:08:12 -- common/autotest_common.sh@10 -- # set +x 00:03:44.648 15:08:12 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:03:44.648 15:08:12 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:44.648 15:08:12 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:44.648 15:08:12 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:03:44.648 15:08:12 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:44.648 15:08:12 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:44.648 15:08:12 -- common/autotest_common.sh@1455 -- # uname 00:03:44.648 15:08:12 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:44.648 15:08:12 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:44.648 15:08:12 -- common/autotest_common.sh@1475 -- # uname 00:03:44.648 15:08:12 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:44.648 15:08:12 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:44.649 15:08:12 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:44.649 lcov: LCOV version 1.15 00:03:44.649 15:08:12 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:04:02.739 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:02.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:10.860 15:08:37 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:10.860 15:08:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:10.860 15:08:37 -- common/autotest_common.sh@10 -- # set +x 00:04:10.860 15:08:37 -- spdk/autotest.sh@78 -- # rm -f 00:04:10.860 15:08:37 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:13.397 0000:5f:00.0 (8086 0a54): Already using the nvme driver 00:04:13.397 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:13.397 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:13.397 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:13.397 0000:00:04.4 (8086 2021): Already using the ioatdma driver 
00:04:13.397 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:13.397 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:13.397 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:13.397 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:13.397 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:13.397 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:13.657 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:13.657 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:13.657 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:13.657 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:13.657 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:13.657 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:13.657 15:08:41 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:13.657 15:08:41 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:13.657 15:08:41 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:13.657 15:08:41 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:13.657 15:08:41 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:13.657 15:08:41 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:13.657 15:08:41 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:13.657 15:08:41 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:13.657 15:08:41 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:13.657 15:08:41 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:13.657 15:08:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:13.657 15:08:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:13.657 15:08:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:13.657 15:08:41 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:13.657 15:08:41 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:13.916 No valid GPT data, bailing 00:04:13.916 15:08:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:13.916 15:08:41 -- scripts/common.sh@394 -- # pt= 00:04:13.916 15:08:41 -- scripts/common.sh@395 -- # return 1 00:04:13.916 15:08:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:13.916 1+0 records in 00:04:13.916 1+0 records out 00:04:13.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00466606 s, 225 MB/s 00:04:13.916 15:08:41 -- spdk/autotest.sh@105 -- # sync 00:04:13.916 15:08:41 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:13.916 15:08:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:13.916 15:08:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:19.192 15:08:46 -- spdk/autotest.sh@111 -- # uname -s 00:04:19.192 15:08:46 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:19.192 15:08:46 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:19.192 15:08:46 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:22.485 Hugepages 00:04:22.485 node hugesize free / total 00:04:22.485 node0 1048576kB 0 / 0 00:04:22.485 node0 2048kB 0 / 0 00:04:22.485 node1 1048576kB 0 / 0 00:04:22.485 node1 2048kB 0 / 0 00:04:22.485 00:04:22.485 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:22.485 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 
00:04:22.485 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:22.485 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:22.485 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:22.485 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:22.485 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:22.485 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:22.485 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:22.485 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:22.485 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:22.485 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:22.485 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:22.485 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:22.485 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:22.485 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:22.744 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:22.744 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:22.744 15:08:50 -- spdk/autotest.sh@117 -- # uname -s 00:04:22.744 15:08:50 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:22.744 15:08:50 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:22.745 15:08:50 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:26.039 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:26.039 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:26.039 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:26.039 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:26.039 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:26.039 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:26.039 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:26.039 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:26.039 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:26.039 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:26.039 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:26.039 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:26.039 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:26.039 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:26.039 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:26.039 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:31.314 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:04:31.314 15:08:58 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:31.883 15:08:59 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:31.883 15:08:59 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:31.883 15:08:59 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:31.883 15:08:59 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:31.883 15:08:59 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:31.883 15:08:59 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:31.883 15:08:59 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:32.150 15:08:59 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:32.150 15:08:59 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:32.150 15:08:59 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:32.150 15:08:59 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5f:00.0 00:04:32.150 15:08:59 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:35.501 Waiting for block devices as requested 00:04:35.501 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:04:35.501 0000:00:04.7 (8086 2021): vfio-pci -> 
ioatdma 00:04:35.501 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:35.501 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:35.762 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:35.762 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:35.762 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:36.022 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:36.022 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:36.022 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:36.281 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:36.281 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:36.281 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:36.540 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:36.540 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:36.540 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:36.859 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:36.859 15:09:04 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:36.859 15:09:04 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5f:00.0 00:04:36.859 15:09:04 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:36.859 15:09:04 -- common/autotest_common.sh@1485 -- # grep 0000:5f:00.0/nvme/nvme 00:04:36.859 15:09:04 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:04:36.859 15:09:04 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 ]] 00:04:36.859 15:09:04 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:04:36.859 15:09:04 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:36.859 15:09:04 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:36.859 15:09:04 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:36.859 15:09:04 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:36.859 15:09:04 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:36.859 15:09:04 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:36.859 15:09:04 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:04:36.859 15:09:04 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:36.859 15:09:04 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:36.859 15:09:04 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:36.859 15:09:04 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:36.859 15:09:04 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:36.859 15:09:04 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:36.859 15:09:04 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:36.859 15:09:04 -- common/autotest_common.sh@1541 -- # continue 00:04:36.859 15:09:04 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:36.859 15:09:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:36.859 15:09:04 -- common/autotest_common.sh@10 -- # set +x 00:04:36.859 15:09:04 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:36.859 15:09:04 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:36.859 15:09:04 -- common/autotest_common.sh@10 -- # set +x 00:04:36.859 15:09:04 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:40.149 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:40.149 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:40.149 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:40.149 
0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:40.149 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:40.149 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:40.149 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:40.149 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:40.149 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:40.149 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:40.149 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:40.149 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:40.149 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:40.149 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:40.149 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:40.149 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:45.423 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:04:45.423 15:09:12 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:45.423 15:09:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:45.423 15:09:12 -- common/autotest_common.sh@10 -- # set +x 00:04:45.423 15:09:12 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:45.423 15:09:12 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:45.423 15:09:12 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:45.423 15:09:12 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:45.423 15:09:12 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:45.423 15:09:12 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:45.423 15:09:12 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:45.423 15:09:12 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:45.423 15:09:12 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:45.423 15:09:12 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:45.423 15:09:12 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:45.423 15:09:12 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:45.423 15:09:12 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:45.423 15:09:12 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:45.423 15:09:12 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5f:00.0 00:04:45.423 15:09:12 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:45.423 15:09:12 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5f:00.0/device 00:04:45.423 15:09:12 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:04:45.423 15:09:12 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:45.423 15:09:12 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:04:45.423 15:09:12 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:04:45.423 15:09:12 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5f:00.0 00:04:45.423 15:09:12 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5f:00.0 ]] 00:04:45.423 15:09:12 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=2932948 00:04:45.423 15:09:12 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:45.423 15:09:12 -- common/autotest_common.sh@1583 -- # waitforlisten 2932948 00:04:45.423 15:09:12 -- common/autotest_common.sh@833 -- # '[' -z 2932948 ']' 00:04:45.423 15:09:12 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.423 15:09:12 -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:45.423 15:09:12 -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.423 15:09:12 -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:45.423 15:09:12 -- common/autotest_common.sh@10 -- # set +x 00:04:45.423 [2024-11-06 15:09:13.020255] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:04:45.423 [2024-11-06 15:09:13.020376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2932948 ] 00:04:45.682 [2024-11-06 15:09:13.169652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.682 [2024-11-06 15:09:13.276394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.618 15:09:14 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:46.618 15:09:14 -- common/autotest_common.sh@866 -- # return 0 00:04:46.618 15:09:14 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:46.618 15:09:14 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:46.618 15:09:14 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5f:00.0 00:04:49.904 nvme0n1 00:04:49.904 15:09:17 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:49.904 [2024-11-06 15:09:17.310368] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:49.904 request: 00:04:49.904 { 00:04:49.904 "nvme_ctrlr_name": "nvme0", 00:04:49.904 "password": "test", 00:04:49.904 "method": "bdev_nvme_opal_revert", 00:04:49.904 "req_id": 1 00:04:49.904 } 00:04:49.904 Got JSON-RPC error response 00:04:49.904 response: 00:04:49.904 { 00:04:49.904 "code": -32602, 00:04:49.904 "message": "Invalid parameters" 00:04:49.904 } 00:04:49.904 15:09:17 -- common/autotest_common.sh@1589 -- # true 00:04:49.904 15:09:17 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:49.904 15:09:17 -- common/autotest_common.sh@1593 -- # killprocess 2932948 00:04:49.904 15:09:17 -- common/autotest_common.sh@952 -- # '[' -z 2932948 ']' 00:04:49.905 15:09:17 -- common/autotest_common.sh@956 -- # kill -0 2932948 00:04:49.905 15:09:17 -- common/autotest_common.sh@957 -- # uname 00:04:49.905 15:09:17 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:49.905 15:09:17 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2932948 00:04:49.905 15:09:17 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:49.905 15:09:17 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:49.905 15:09:17 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2932948' 00:04:49.905 killing process with pid 2932948 00:04:49.905 15:09:17 -- common/autotest_common.sh@971 -- # kill 2932948 00:04:49.905 15:09:17 -- common/autotest_common.sh@976 -- # wait 2932948 00:04:59.879 15:09:26 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:59.879 15:09:26 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:59.879 15:09:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:59.879 15:09:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:59.879 15:09:26 -- spdk/autotest.sh@149 -- # 
timing_enter lib 00:04:59.879 15:09:26 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:59.879 15:09:26 -- common/autotest_common.sh@10 -- # set +x 00:04:59.879 15:09:26 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:59.879 15:09:26 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:59.879 15:09:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:59.879 15:09:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:59.879 15:09:26 -- common/autotest_common.sh@10 -- # set +x 00:04:59.879 ************************************ 00:04:59.879 START TEST env 00:04:59.879 ************************************ 00:04:59.879 15:09:26 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:59.879 * Looking for test storage... 00:04:59.879 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:04:59.879 15:09:26 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:59.879 15:09:26 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:59.879 15:09:26 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:59.879 15:09:26 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:59.879 15:09:26 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.879 15:09:26 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.879 15:09:26 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.879 15:09:26 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.879 15:09:26 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.879 15:09:26 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.879 15:09:26 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.879 15:09:26 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.879 15:09:26 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.879 15:09:26 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.879 15:09:26 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.879 15:09:26 env -- scripts/common.sh@344 -- # case "$op" in 00:04:59.879 15:09:26 env -- scripts/common.sh@345 -- # : 1 00:04:59.879 15:09:26 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.879 15:09:26 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.879 15:09:26 env -- scripts/common.sh@365 -- # decimal 1 00:04:59.879 15:09:26 env -- scripts/common.sh@353 -- # local d=1 00:04:59.879 15:09:26 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.879 15:09:26 env -- scripts/common.sh@355 -- # echo 1 00:04:59.879 15:09:26 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.879 15:09:26 env -- scripts/common.sh@366 -- # decimal 2 00:04:59.879 15:09:26 env -- scripts/common.sh@353 -- # local d=2 00:04:59.879 15:09:26 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.879 15:09:26 env -- scripts/common.sh@355 -- # echo 2 00:04:59.879 15:09:26 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.879 15:09:26 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.879 15:09:26 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.879 15:09:26 env -- scripts/common.sh@368 -- # return 0 00:04:59.879 15:09:26 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.879 15:09:26 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:59.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.879 --rc genhtml_branch_coverage=1 00:04:59.879 --rc genhtml_function_coverage=1 00:04:59.879 --rc genhtml_legend=1 00:04:59.879 --rc geninfo_all_blocks=1 00:04:59.879 --rc geninfo_unexecuted_blocks=1 00:04:59.879 00:04:59.879 ' 00:04:59.879 15:09:26 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:59.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.879 --rc genhtml_branch_coverage=1 00:04:59.879 --rc genhtml_function_coverage=1 00:04:59.879 --rc genhtml_legend=1 00:04:59.879 --rc geninfo_all_blocks=1 00:04:59.879 --rc geninfo_unexecuted_blocks=1 00:04:59.879 00:04:59.879 ' 00:04:59.879 15:09:26 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:59.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.879 --rc genhtml_branch_coverage=1 00:04:59.879 --rc genhtml_function_coverage=1 00:04:59.879 --rc genhtml_legend=1 00:04:59.879 --rc geninfo_all_blocks=1 00:04:59.879 --rc geninfo_unexecuted_blocks=1 00:04:59.879 00:04:59.879 ' 00:04:59.879 15:09:26 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:59.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.879 --rc genhtml_branch_coverage=1 00:04:59.879 --rc genhtml_function_coverage=1 00:04:59.879 --rc genhtml_legend=1 00:04:59.879 --rc geninfo_all_blocks=1 00:04:59.879 --rc geninfo_unexecuted_blocks=1 00:04:59.879 00:04:59.879 ' 00:04:59.879 15:09:26 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:59.879 15:09:26 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:59.879 15:09:26 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:59.879 15:09:26 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.879 ************************************ 00:04:59.879 START TEST env_memory 00:04:59.879 ************************************ 00:04:59.879 15:09:26 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:59.879 00:04:59.879 00:04:59.879 CUnit - A unit testing framework for C - Version 2.1-3 00:04:59.879 http://cunit.sourceforge.net/ 00:04:59.879 00:04:59.879 00:04:59.879 Suite: memory 00:04:59.879 Test: alloc and free memory map ...[2024-11-06 15:09:26.758727] 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:59.879 passed 00:04:59.879 Test: mem map translation ...[2024-11-06 15:09:26.793842] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:59.879 [2024-11-06 15:09:26.793882] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:59.879 [2024-11-06 15:09:26.793937] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:59.879 [2024-11-06 15:09:26.793953] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:59.879 passed 00:04:59.879 Test: mem map registration ...[2024-11-06 15:09:26.849363] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:59.879 [2024-11-06 15:09:26.849391] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:59.879 passed 00:04:59.879 Test: mem map adjacent registrations ...passed 00:04:59.879 00:04:59.879 Run Summary: Type Total Ran Passed Failed Inactive 00:04:59.879 suites 1 1 n/a 0 0 00:04:59.879 tests 4 4 4 0 0 00:04:59.879 asserts 152 152 152 0 n/a 00:04:59.879 00:04:59.879 Elapsed time = 0.200 seconds 00:04:59.879 00:04:59.879 real 0m0.237s 00:04:59.879 user 0m0.214s 00:04:59.879 sys 0m0.022s 00:04:59.879 15:09:26 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:59.879 15:09:26 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:59.879 ************************************ 00:04:59.879 END TEST env_memory 00:04:59.879 ************************************ 00:04:59.879 15:09:26 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:59.879 15:09:26 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:59.880 15:09:26 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:59.880 15:09:26 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.880 ************************************ 00:04:59.880 START TEST env_vtophys 00:04:59.880 ************************************ 00:04:59.880 15:09:27 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:59.880 EAL: lib.eal log level changed from notice to debug 00:04:59.880 EAL: Detected lcore 0 as core 0 on socket 0 00:04:59.880 EAL: Detected lcore 1 as core 1 on socket 0 00:04:59.880 EAL: Detected lcore 2 as core 2 on socket 0 00:04:59.880 EAL: Detected lcore 3 as core 3 on socket 0 00:04:59.880 EAL: Detected lcore 4 as core 4 on socket 0 00:04:59.880 EAL: Detected lcore 5 as core 8 on socket 0 00:04:59.880 EAL: Detected lcore 6 as core 9 on socket 0 00:04:59.880 EAL: Detected lcore 7 as core 10 on socket 0 00:04:59.880 EAL: Detected lcore 8 as core 11 on socket 0 00:04:59.880 EAL: Detected lcore 9 as core 16 on socket 0 00:04:59.880 EAL: Detected lcore 10 as core 17 on socket 0 00:04:59.880 
EAL: Detected lcore 11 as core 18 on socket 0 00:04:59.880 EAL: Detected lcore 12 as core 19 on socket 0 00:04:59.880 EAL: Detected lcore 13 as core 20 on socket 0 00:04:59.880 EAL: Detected lcore 14 as core 24 on socket 0 00:04:59.880 EAL: Detected lcore 15 as core 25 on socket 0 00:04:59.880 EAL: Detected lcore 16 as core 26 on socket 0 00:04:59.880 EAL: Detected lcore 17 as core 27 on socket 0 00:04:59.880 EAL: Detected lcore 18 as core 0 on socket 1 00:04:59.880 EAL: Detected lcore 19 as core 1 on socket 1 00:04:59.880 EAL: Detected lcore 20 as core 2 on socket 1 00:04:59.880 EAL: Detected lcore 21 as core 3 on socket 1 00:04:59.880 EAL: Detected lcore 22 as core 4 on socket 1 00:04:59.880 EAL: Detected lcore 23 as core 8 on socket 1 00:04:59.880 EAL: Detected lcore 24 as core 9 on socket 1 00:04:59.880 EAL: Detected lcore 25 as core 10 on socket 1 00:04:59.880 EAL: Detected lcore 26 as core 11 on socket 1 00:04:59.880 EAL: Detected lcore 27 as core 16 on socket 1 00:04:59.880 EAL: Detected lcore 28 as core 17 on socket 1 00:04:59.880 EAL: Detected lcore 29 as core 18 on socket 1 00:04:59.880 EAL: Detected lcore 30 as core 19 on socket 1 00:04:59.880 EAL: Detected lcore 31 as core 20 on socket 1 00:04:59.880 EAL: Detected lcore 32 as core 24 on socket 1 00:04:59.880 EAL: Detected lcore 33 as core 25 on socket 1 00:04:59.880 EAL: Detected lcore 34 as core 26 on socket 1 00:04:59.880 EAL: Detected lcore 35 as core 27 on socket 1 00:04:59.880 EAL: Detected lcore 36 as core 0 on socket 0 00:04:59.880 EAL: Detected lcore 37 as core 1 on socket 0 00:04:59.880 EAL: Detected lcore 38 as core 2 on socket 0 00:04:59.880 EAL: Detected lcore 39 as core 3 on socket 0 00:04:59.880 EAL: Detected lcore 40 as core 4 on socket 0 00:04:59.880 EAL: Detected lcore 41 as core 8 on socket 0 00:04:59.880 EAL: Detected lcore 42 as core 9 on socket 0 00:04:59.880 EAL: Detected lcore 43 as core 10 on socket 0 00:04:59.880 EAL: Detected lcore 44 as core 11 on socket 0 00:04:59.880 EAL: Detected lcore 45 as core 16 on socket 0 00:04:59.880 EAL: Detected lcore 46 as core 17 on socket 0 00:04:59.880 EAL: Detected lcore 47 as core 18 on socket 0 00:04:59.880 EAL: Detected lcore 48 as core 19 on socket 0 00:04:59.880 EAL: Detected lcore 49 as core 20 on socket 0 00:04:59.880 EAL: Detected lcore 50 as core 24 on socket 0 00:04:59.880 EAL: Detected lcore 51 as core 25 on socket 0 00:04:59.880 EAL: Detected lcore 52 as core 26 on socket 0 00:04:59.880 EAL: Detected lcore 53 as core 27 on socket 0 00:04:59.880 EAL: Detected lcore 54 as core 0 on socket 1 00:04:59.880 EAL: Detected lcore 55 as core 1 on socket 1 00:04:59.880 EAL: Detected lcore 56 as core 2 on socket 1 00:04:59.880 EAL: Detected lcore 57 as core 3 on socket 1 00:04:59.880 EAL: Detected lcore 58 as core 4 on socket 1 00:04:59.880 EAL: Detected lcore 59 as core 8 on socket 1 00:04:59.880 EAL: Detected lcore 60 as core 9 on socket 1 00:04:59.880 EAL: Detected lcore 61 as core 10 on socket 1 00:04:59.880 EAL: Detected lcore 62 as core 11 on socket 1 00:04:59.880 EAL: Detected lcore 63 as core 16 on socket 1 00:04:59.880 EAL: Detected lcore 64 as core 17 on socket 1 00:04:59.880 EAL: Detected lcore 65 as core 18 on socket 1 00:04:59.880 EAL: Detected lcore 66 as core 19 on socket 1 00:04:59.880 EAL: Detected lcore 67 as core 20 on socket 1 00:04:59.880 EAL: Detected lcore 68 as core 24 on socket 1 00:04:59.880 EAL: Detected lcore 69 as core 25 on socket 1 00:04:59.880 EAL: Detected lcore 70 as core 26 on socket 1 00:04:59.880 EAL: Detected lcore 71 as core 27 
on socket 1 00:04:59.880 EAL: Maximum logical cores by configuration: 128 00:04:59.880 EAL: Detected CPU lcores: 72 00:04:59.880 EAL: Detected NUMA nodes: 2 00:04:59.880 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:59.880 EAL: Detected shared linkage of DPDK 00:04:59.880 EAL: No shared files mode enabled, IPC will be disabled 00:04:59.880 EAL: Bus pci wants IOVA as 'DC' 00:04:59.880 EAL: Buses did not request a specific IOVA mode. 00:04:59.880 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:59.880 EAL: Selected IOVA mode 'VA' 00:04:59.880 EAL: Probing VFIO support... 00:04:59.880 EAL: IOMMU type 1 (Type 1) is supported 00:04:59.880 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:59.880 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:59.880 EAL: VFIO support initialized 00:04:59.880 EAL: Ask a virtual area of 0x2e000 bytes 00:04:59.880 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:59.880 EAL: Setting up physically contiguous memory... 00:04:59.880 EAL: Setting maximum number of open files to 524288 00:04:59.880 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:59.880 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:59.880 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:59.880 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.880 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:59.880 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.880 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.880 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:59.880 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:59.880 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.880 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:59.880 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.880 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.880 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:59.880 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:59.880 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.880 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:59.880 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.880 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.880 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:59.880 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:59.880 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.880 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:59.880 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.880 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.880 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:59.880 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:59.880 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:59.880 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.880 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:59.880 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:59.880 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.880 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:59.880 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:59.880 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.880 EAL: Virtual area found at 0x201400a00000 (size = 
0x61000) 00:04:59.880 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:59.880 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.880 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:59.880 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:59.880 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.880 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:59.880 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:59.880 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.880 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:59.880 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:59.880 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.880 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:59.880 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:59.880 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.880 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:59.880 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:59.880 EAL: Hugepages will be freed exactly as allocated. 00:04:59.880 EAL: No shared files mode enabled, IPC is disabled 00:04:59.880 EAL: No shared files mode enabled, IPC is disabled 00:04:59.880 EAL: TSC frequency is ~2300000 KHz 00:04:59.880 EAL: Main lcore 0 is ready (tid=7fea9f670a40;cpuset=[0]) 00:04:59.880 EAL: Trying to obtain current memory policy. 00:04:59.880 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.880 EAL: Restoring previous memory policy: 0 00:04:59.880 EAL: request: mp_malloc_sync 00:04:59.880 EAL: No shared files mode enabled, IPC is disabled 00:04:59.880 EAL: Heap on socket 0 was expanded by 2MB 00:04:59.880 EAL: No shared files mode enabled, IPC is disabled 00:04:59.880 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:59.880 EAL: Mem event callback 'spdk:(nil)' registered 00:04:59.880 00:04:59.880 00:04:59.880 CUnit - A unit testing framework for C - Version 2.1-3 00:04:59.880 http://cunit.sourceforge.net/ 00:04:59.880 00:04:59.880 00:04:59.880 Suite: components_suite 00:05:00.140 Test: vtophys_malloc_test ...passed 00:05:00.140 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:00.140 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.140 EAL: Restoring previous memory policy: 4 00:05:00.140 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.140 EAL: request: mp_malloc_sync 00:05:00.140 EAL: No shared files mode enabled, IPC is disabled 00:05:00.140 EAL: Heap on socket 0 was expanded by 4MB 00:05:00.140 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.140 EAL: request: mp_malloc_sync 00:05:00.140 EAL: No shared files mode enabled, IPC is disabled 00:05:00.140 EAL: Heap on socket 0 was shrunk by 4MB 00:05:00.140 EAL: Trying to obtain current memory policy. 00:05:00.140 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.140 EAL: Restoring previous memory policy: 4 00:05:00.140 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.140 EAL: request: mp_malloc_sync 00:05:00.140 EAL: No shared files mode enabled, IPC is disabled 00:05:00.140 EAL: Heap on socket 0 was expanded by 6MB 00:05:00.140 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.140 EAL: request: mp_malloc_sync 00:05:00.140 EAL: No shared files mode enabled, IPC is disabled 00:05:00.140 EAL: Heap on socket 0 was shrunk by 6MB 00:05:00.140 EAL: Trying to obtain current memory policy. 
00:05:00.140 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.140 EAL: Restoring previous memory policy: 4 00:05:00.140 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.140 EAL: request: mp_malloc_sync 00:05:00.140 EAL: No shared files mode enabled, IPC is disabled 00:05:00.140 EAL: Heap on socket 0 was expanded by 10MB 00:05:00.140 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.140 EAL: request: mp_malloc_sync 00:05:00.140 EAL: No shared files mode enabled, IPC is disabled 00:05:00.140 EAL: Heap on socket 0 was shrunk by 10MB 00:05:00.140 EAL: Trying to obtain current memory policy. 00:05:00.140 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.140 EAL: Restoring previous memory policy: 4 00:05:00.140 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.140 EAL: request: mp_malloc_sync 00:05:00.140 EAL: No shared files mode enabled, IPC is disabled 00:05:00.140 EAL: Heap on socket 0 was expanded by 18MB 00:05:00.140 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.140 EAL: request: mp_malloc_sync 00:05:00.140 EAL: No shared files mode enabled, IPC is disabled 00:05:00.140 EAL: Heap on socket 0 was shrunk by 18MB 00:05:00.140 EAL: Trying to obtain current memory policy. 00:05:00.140 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.140 EAL: Restoring previous memory policy: 4 00:05:00.140 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.140 EAL: request: mp_malloc_sync 00:05:00.140 EAL: No shared files mode enabled, IPC is disabled 00:05:00.140 EAL: Heap on socket 0 was expanded by 34MB 00:05:00.140 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.140 EAL: request: mp_malloc_sync 00:05:00.140 EAL: No shared files mode enabled, IPC is disabled 00:05:00.140 EAL: Heap on socket 0 was shrunk by 34MB 00:05:00.399 EAL: Trying to obtain current memory policy. 00:05:00.399 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.399 EAL: Restoring previous memory policy: 4 00:05:00.399 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.399 EAL: request: mp_malloc_sync 00:05:00.399 EAL: No shared files mode enabled, IPC is disabled 00:05:00.399 EAL: Heap on socket 0 was expanded by 66MB 00:05:00.399 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.399 EAL: request: mp_malloc_sync 00:05:00.399 EAL: No shared files mode enabled, IPC is disabled 00:05:00.399 EAL: Heap on socket 0 was shrunk by 66MB 00:05:00.684 EAL: Trying to obtain current memory policy. 00:05:00.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.684 EAL: Restoring previous memory policy: 4 00:05:00.684 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.684 EAL: request: mp_malloc_sync 00:05:00.684 EAL: No shared files mode enabled, IPC is disabled 00:05:00.684 EAL: Heap on socket 0 was expanded by 130MB 00:05:00.684 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.684 EAL: request: mp_malloc_sync 00:05:00.684 EAL: No shared files mode enabled, IPC is disabled 00:05:00.684 EAL: Heap on socket 0 was shrunk by 130MB 00:05:00.944 EAL: Trying to obtain current memory policy. 
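An aside on the sizes in this vtophys_spdk_malloc_test trace: the expand/shrink steps (4MB, 6MB, 10MB, 18MB, 34MB, 66MB, 130MB here, and 258MB, 514MB, 1026MB further down) appear to follow a (2^k + 2) MB progression, so each round roughly doubles the allocation driven through the registered 'spdk' mem event callback. A throwaway loop to reproduce the apparent sequence (an observation about the trace, not part of the test itself):

    # Apparent allocation-size progression: (2^k + 2) MB for k = 1..10
    for k in $(seq 1 10); do printf '%dMB\n' $(( (1 << k) + 2 )); done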
00:05:00.944 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.944 EAL: Restoring previous memory policy: 4 00:05:00.944 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.944 EAL: request: mp_malloc_sync 00:05:00.944 EAL: No shared files mode enabled, IPC is disabled 00:05:00.944 EAL: Heap on socket 0 was expanded by 258MB 00:05:01.512 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.512 EAL: request: mp_malloc_sync 00:05:01.512 EAL: No shared files mode enabled, IPC is disabled 00:05:01.512 EAL: Heap on socket 0 was shrunk by 258MB 00:05:02.078 EAL: Trying to obtain current memory policy. 00:05:02.078 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:02.078 EAL: Restoring previous memory policy: 4 00:05:02.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.078 EAL: request: mp_malloc_sync 00:05:02.078 EAL: No shared files mode enabled, IPC is disabled 00:05:02.078 EAL: Heap on socket 0 was expanded by 514MB 00:05:03.013 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.013 EAL: request: mp_malloc_sync 00:05:03.013 EAL: No shared files mode enabled, IPC is disabled 00:05:03.013 EAL: Heap on socket 0 was shrunk by 514MB 00:05:03.949 EAL: Trying to obtain current memory policy. 00:05:03.949 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.949 EAL: Restoring previous memory policy: 4 00:05:03.949 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.949 EAL: request: mp_malloc_sync 00:05:03.949 EAL: No shared files mode enabled, IPC is disabled 00:05:03.949 EAL: Heap on socket 0 was expanded by 1026MB 00:05:05.850 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.850 EAL: request: mp_malloc_sync 00:05:05.850 EAL: No shared files mode enabled, IPC is disabled 00:05:05.850 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:07.753 passed 00:05:07.753 00:05:07.753 Run Summary: Type Total Ran Passed Failed Inactive 00:05:07.753 suites 1 1 n/a 0 0 00:05:07.753 tests 2 2 2 0 0 00:05:07.753 asserts 497 497 497 0 n/a 00:05:07.753 00:05:07.753 Elapsed time = 7.627 seconds 00:05:07.753 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.753 EAL: request: mp_malloc_sync 00:05:07.753 EAL: No shared files mode enabled, IPC is disabled 00:05:07.753 EAL: Heap on socket 0 was shrunk by 2MB 00:05:07.753 EAL: No shared files mode enabled, IPC is disabled 00:05:07.753 EAL: No shared files mode enabled, IPC is disabled 00:05:07.753 EAL: No shared files mode enabled, IPC is disabled 00:05:07.753 00:05:07.753 real 0m7.923s 00:05:07.753 user 0m6.916s 00:05:07.753 sys 0m0.948s 00:05:07.753 15:09:34 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:07.753 15:09:34 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:07.753 ************************************ 00:05:07.753 END TEST env_vtophys 00:05:07.753 ************************************ 00:05:07.753 15:09:34 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:07.753 15:09:34 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:07.753 15:09:34 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:07.753 15:09:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.753 ************************************ 00:05:07.753 START TEST env_pci 00:05:07.753 ************************************ 00:05:07.753 15:09:35 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:07.753 00:05:07.753 00:05:07.753 CUnit - A unit testing framework for 
C - Version 2.1-3 00:05:07.753 http://cunit.sourceforge.net/ 00:05:07.753 00:05:07.753 00:05:07.753 Suite: pci 00:05:07.753 Test: pci_hook ...[2024-11-06 15:09:35.071562] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2935974 has claimed it 00:05:07.753 EAL: Cannot find device (10000:00:01.0) 00:05:07.753 EAL: Failed to attach device on primary process 00:05:07.753 passed 00:05:07.753 00:05:07.753 Run Summary: Type Total Ran Passed Failed Inactive 00:05:07.753 suites 1 1 n/a 0 0 00:05:07.753 tests 1 1 1 0 0 00:05:07.753 asserts 25 25 25 0 n/a 00:05:07.753 00:05:07.753 Elapsed time = 0.058 seconds 00:05:07.753 00:05:07.753 real 0m0.155s 00:05:07.753 user 0m0.051s 00:05:07.753 sys 0m0.104s 00:05:07.753 15:09:35 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:07.753 15:09:35 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:07.753 ************************************ 00:05:07.753 END TEST env_pci 00:05:07.753 ************************************ 00:05:07.753 15:09:35 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:07.753 15:09:35 env -- env/env.sh@15 -- # uname 00:05:07.753 15:09:35 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:07.753 15:09:35 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:07.753 15:09:35 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:07.753 15:09:35 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:05:07.753 15:09:35 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:07.753 15:09:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.753 ************************************ 00:05:07.753 START TEST env_dpdk_post_init 00:05:07.753 ************************************ 00:05:07.753 15:09:35 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:07.753 EAL: Detected CPU lcores: 72 00:05:07.753 EAL: Detected NUMA nodes: 2 00:05:07.753 EAL: Detected shared linkage of DPDK 00:05:07.753 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:08.013 EAL: Selected IOVA mode 'VA' 00:05:08.013 EAL: VFIO support initialized 00:05:08.013 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:08.013 EAL: Using IOMMU type 1 (Type 1) 00:05:08.013 EAL: Ignore mapping IO port bar(1) 00:05:08.013 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:08.013 EAL: Ignore mapping IO port bar(1) 00:05:08.013 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:08.013 EAL: Ignore mapping IO port bar(1) 00:05:08.013 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:08.013 EAL: Ignore mapping IO port bar(1) 00:05:08.013 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:08.013 EAL: Ignore mapping IO port bar(1) 00:05:08.013 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:08.013 EAL: Ignore mapping IO port bar(1) 00:05:08.013 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:08.013 EAL: Ignore mapping IO port bar(1) 00:05:08.013 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 
0000:00:04.6 (socket 0) 00:05:08.013 EAL: Ignore mapping IO port bar(1) 00:05:08.013 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:08.950 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5f:00.0 (socket 0) 00:05:08.950 EAL: Ignore mapping IO port bar(1) 00:05:08.950 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:08.950 EAL: Ignore mapping IO port bar(1) 00:05:08.950 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:08.950 EAL: Ignore mapping IO port bar(1) 00:05:08.950 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:08.950 EAL: Ignore mapping IO port bar(1) 00:05:08.950 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:08.950 EAL: Ignore mapping IO port bar(1) 00:05:08.950 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:08.950 EAL: Ignore mapping IO port bar(1) 00:05:08.950 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:08.950 EAL: Ignore mapping IO port bar(1) 00:05:08.950 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:08.950 EAL: Ignore mapping IO port bar(1) 00:05:08.950 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:18.925 EAL: Releasing PCI mapped resource for 0000:5f:00.0 00:05:18.925 EAL: Calling pci_unmap_resource for 0000:5f:00.0 at 0x202001020000 00:05:18.925 Starting DPDK initialization... 00:05:18.925 Starting SPDK post initialization... 00:05:18.925 SPDK NVMe probe 00:05:18.925 Attaching to 0000:5f:00.0 00:05:18.925 Attached to 0000:5f:00.0 00:05:18.925 Cleaning up... 00:05:18.925 00:05:18.925 real 0m10.102s 00:05:18.925 user 0m7.773s 00:05:18.925 sys 0m1.378s 00:05:18.925 15:09:45 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:18.925 15:09:45 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:18.925 ************************************ 00:05:18.925 END TEST env_dpdk_post_init 00:05:18.925 ************************************ 00:05:18.925 15:09:45 env -- env/env.sh@26 -- # uname 00:05:18.925 15:09:45 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:18.925 15:09:45 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:18.925 15:09:45 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:18.925 15:09:45 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:18.925 15:09:45 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.925 ************************************ 00:05:18.925 START TEST env_mem_callbacks 00:05:18.925 ************************************ 00:05:18.925 15:09:45 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:18.925 EAL: Detected CPU lcores: 72 00:05:18.925 EAL: Detected NUMA nodes: 2 00:05:18.925 EAL: Detected shared linkage of DPDK 00:05:18.925 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:18.925 EAL: Selected IOVA mode 'VA' 00:05:18.925 EAL: VFIO support initialized 00:05:18.925 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:18.925 00:05:18.925 00:05:18.925 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.925 http://cunit.sourceforge.net/ 00:05:18.925 00:05:18.925 00:05:18.925 Suite: memory 00:05:18.925 Test: test ... 
00:05:18.925 register 0x200000200000 2097152 00:05:18.925 malloc 3145728 00:05:18.925 register 0x200000400000 4194304 00:05:18.925 buf 0x2000004fffc0 len 3145728 PASSED 00:05:18.925 malloc 64 00:05:18.925 buf 0x2000004ffec0 len 64 PASSED 00:05:18.925 malloc 4194304 00:05:18.925 register 0x200000800000 6291456 00:05:18.925 buf 0x2000009fffc0 len 4194304 PASSED 00:05:18.925 free 0x2000004fffc0 3145728 00:05:18.925 free 0x2000004ffec0 64 00:05:18.925 unregister 0x200000400000 4194304 PASSED 00:05:18.925 free 0x2000009fffc0 4194304 00:05:18.925 unregister 0x200000800000 6291456 PASSED 00:05:18.925 malloc 8388608 00:05:18.925 register 0x200000400000 10485760 00:05:18.925 buf 0x2000005fffc0 len 8388608 PASSED 00:05:18.925 free 0x2000005fffc0 8388608 00:05:18.925 unregister 0x200000400000 10485760 PASSED 00:05:18.925 passed 00:05:18.925 00:05:18.925 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.925 suites 1 1 n/a 0 0 00:05:18.925 tests 1 1 1 0 0 00:05:18.925 asserts 15 15 15 0 n/a 00:05:18.925 00:05:18.925 Elapsed time = 0.074 seconds 00:05:18.925 00:05:18.925 real 0m0.206s 00:05:18.925 user 0m0.111s 00:05:18.925 sys 0m0.095s 00:05:18.925 15:09:45 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:18.925 15:09:45 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:18.925 ************************************ 00:05:18.925 END TEST env_mem_callbacks 00:05:18.925 ************************************ 00:05:18.925 00:05:18.925 real 0m19.249s 00:05:18.925 user 0m15.304s 00:05:18.925 sys 0m2.982s 00:05:18.925 15:09:45 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:18.925 15:09:45 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.925 ************************************ 00:05:18.925 END TEST env 00:05:18.925 ************************************ 00:05:18.925 15:09:45 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:18.925 15:09:45 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:18.925 15:09:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:18.925 15:09:45 -- common/autotest_common.sh@10 -- # set +x 00:05:18.925 ************************************ 00:05:18.925 START TEST rpc 00:05:18.925 ************************************ 00:05:18.925 15:09:45 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:18.925 * Looking for test storage... 
00:05:18.925 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:18.925 15:09:45 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:18.925 15:09:45 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:18.925 15:09:45 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:18.925 15:09:45 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:18.925 15:09:45 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.925 15:09:45 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.925 15:09:45 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.925 15:09:45 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.925 15:09:45 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.925 15:09:45 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.925 15:09:45 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.925 15:09:45 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.925 15:09:45 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.925 15:09:45 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.925 15:09:45 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.925 15:09:45 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:18.925 15:09:45 rpc -- scripts/common.sh@345 -- # : 1 00:05:18.925 15:09:45 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.925 15:09:45 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.925 15:09:45 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:18.925 15:09:45 rpc -- scripts/common.sh@353 -- # local d=1 00:05:18.925 15:09:45 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.925 15:09:45 rpc -- scripts/common.sh@355 -- # echo 1 00:05:18.925 15:09:45 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.925 15:09:45 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:18.925 15:09:45 rpc -- scripts/common.sh@353 -- # local d=2 00:05:18.925 15:09:45 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.925 15:09:45 rpc -- scripts/common.sh@355 -- # echo 2 00:05:18.925 15:09:45 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.925 15:09:45 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.925 15:09:45 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.925 15:09:46 rpc -- scripts/common.sh@368 -- # return 0 00:05:18.925 15:09:46 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.925 15:09:46 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:18.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.925 --rc genhtml_branch_coverage=1 00:05:18.925 --rc genhtml_function_coverage=1 00:05:18.925 --rc genhtml_legend=1 00:05:18.925 --rc geninfo_all_blocks=1 00:05:18.925 --rc geninfo_unexecuted_blocks=1 00:05:18.925 00:05:18.925 ' 00:05:18.925 15:09:46 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:18.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.925 --rc genhtml_branch_coverage=1 00:05:18.925 --rc genhtml_function_coverage=1 00:05:18.925 --rc genhtml_legend=1 00:05:18.925 --rc geninfo_all_blocks=1 00:05:18.925 --rc geninfo_unexecuted_blocks=1 00:05:18.925 00:05:18.925 ' 00:05:18.925 15:09:46 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:18.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.925 --rc genhtml_branch_coverage=1 00:05:18.925 --rc genhtml_function_coverage=1 00:05:18.925 
--rc genhtml_legend=1 00:05:18.925 --rc geninfo_all_blocks=1 00:05:18.925 --rc geninfo_unexecuted_blocks=1 00:05:18.925 00:05:18.925 ' 00:05:18.925 15:09:46 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:18.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.926 --rc genhtml_branch_coverage=1 00:05:18.926 --rc genhtml_function_coverage=1 00:05:18.926 --rc genhtml_legend=1 00:05:18.926 --rc geninfo_all_blocks=1 00:05:18.926 --rc geninfo_unexecuted_blocks=1 00:05:18.926 00:05:18.926 ' 00:05:18.926 15:09:46 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2937534 00:05:18.926 15:09:46 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.926 15:09:46 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:18.926 15:09:46 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2937534 00:05:18.926 15:09:46 rpc -- common/autotest_common.sh@833 -- # '[' -z 2937534 ']' 00:05:18.926 15:09:46 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.926 15:09:46 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:18.926 15:09:46 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.926 15:09:46 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:18.926 15:09:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.926 [2024-11-06 15:09:46.106427] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:05:18.926 [2024-11-06 15:09:46.106538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2937534 ] 00:05:18.926 [2024-11-06 15:09:46.241118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.926 [2024-11-06 15:09:46.345930] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:18.926 [2024-11-06 15:09:46.345981] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2937534' to capture a snapshot of events at runtime. 00:05:18.926 [2024-11-06 15:09:46.345997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:18.926 [2024-11-06 15:09:46.346009] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:18.926 [2024-11-06 15:09:46.346025] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2937534 for offline analysis/debug. 
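The rpc_integrity test that follows exercises spdk_tgt's JSON-RPC interface on /var/tmp/spdk.sock: it creates a malloc bdev, layers a passthru bdev on top of it, checks the bdev count with jq, then deletes both and confirms the list is empty again. As a rough sketch only (not part of the captured log), the same sequence could be replayed by hand with SPDK's scripts/rpc.py against an already-running spdk_tgt, assuming the default RPC socket and that the malloc bdev comes back named Malloc0:

  scripts/rpc.py bdev_malloc_create 8 512                      # 8 MB malloc bdev with 512-byte blocks; prints the new bdev name
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # claim the malloc bdev under a passthru vbdev
  scripts/rpc.py bdev_get_bdevs | jq length                    # expect 2: Malloc0 plus Passthru0
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc0
  scripts/rpc.py bdev_get_bdevs | jq length                    # expect 0 once both are gone

rpc.py also accepts -s to target a non-default RPC socket if spdk_tgt was started with a custom listen address.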
00:05:18.926 [2024-11-06 15:09:46.347299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.494 15:09:47 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:19.494 15:09:47 rpc -- common/autotest_common.sh@866 -- # return 0 00:05:19.494 15:09:47 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:19.494 15:09:47 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:19.494 15:09:47 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:19.494 15:09:47 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:19.494 15:09:47 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:19.494 15:09:47 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:19.494 15:09:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.753 ************************************ 00:05:19.753 START TEST rpc_integrity 00:05:19.753 ************************************ 00:05:19.753 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:19.753 15:09:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:19.753 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.753 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.753 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.753 15:09:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:19.753 15:09:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:19.753 15:09:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:19.753 15:09:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:19.753 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.753 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.753 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.753 15:09:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:19.753 15:09:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:19.753 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.753 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.753 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.753 15:09:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:19.753 { 00:05:19.753 "name": "Malloc0", 00:05:19.753 "aliases": [ 00:05:19.753 "cd498421-d97e-4b5f-a727-32932e805754" 00:05:19.753 ], 00:05:19.753 "product_name": "Malloc disk", 00:05:19.753 "block_size": 512, 00:05:19.753 "num_blocks": 16384, 00:05:19.753 "uuid": "cd498421-d97e-4b5f-a727-32932e805754", 00:05:19.753 "assigned_rate_limits": { 00:05:19.753 "rw_ios_per_sec": 0, 00:05:19.753 "rw_mbytes_per_sec": 0, 00:05:19.753 "r_mbytes_per_sec": 0, 00:05:19.753 "w_mbytes_per_sec": 0 00:05:19.753 }, 00:05:19.753 "claimed": false, 
00:05:19.753 "zoned": false, 00:05:19.753 "supported_io_types": { 00:05:19.753 "read": true, 00:05:19.753 "write": true, 00:05:19.753 "unmap": true, 00:05:19.753 "flush": true, 00:05:19.753 "reset": true, 00:05:19.753 "nvme_admin": false, 00:05:19.753 "nvme_io": false, 00:05:19.753 "nvme_io_md": false, 00:05:19.753 "write_zeroes": true, 00:05:19.753 "zcopy": true, 00:05:19.753 "get_zone_info": false, 00:05:19.753 "zone_management": false, 00:05:19.753 "zone_append": false, 00:05:19.753 "compare": false, 00:05:19.753 "compare_and_write": false, 00:05:19.753 "abort": true, 00:05:19.753 "seek_hole": false, 00:05:19.753 "seek_data": false, 00:05:19.753 "copy": true, 00:05:19.753 "nvme_iov_md": false 00:05:19.753 }, 00:05:19.753 "memory_domains": [ 00:05:19.753 { 00:05:19.753 "dma_device_id": "system", 00:05:19.753 "dma_device_type": 1 00:05:19.753 }, 00:05:19.753 { 00:05:19.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.753 "dma_device_type": 2 00:05:19.753 } 00:05:19.753 ], 00:05:19.753 "driver_specific": {} 00:05:19.753 } 00:05:19.753 ]' 00:05:19.753 15:09:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:19.753 15:09:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:19.753 15:09:47 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:19.753 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.753 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.753 [2024-11-06 15:09:47.301352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:19.753 [2024-11-06 15:09:47.301400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:19.753 [2024-11-06 15:09:47.301427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600001c880 00:05:19.753 [2024-11-06 15:09:47.301440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:19.753 [2024-11-06 15:09:47.303714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:19.753 [2024-11-06 15:09:47.303742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:19.753 Passthru0 00:05:19.753 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.753 15:09:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:19.753 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.753 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.754 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.754 15:09:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:19.754 { 00:05:19.754 "name": "Malloc0", 00:05:19.754 "aliases": [ 00:05:19.754 "cd498421-d97e-4b5f-a727-32932e805754" 00:05:19.754 ], 00:05:19.754 "product_name": "Malloc disk", 00:05:19.754 "block_size": 512, 00:05:19.754 "num_blocks": 16384, 00:05:19.754 "uuid": "cd498421-d97e-4b5f-a727-32932e805754", 00:05:19.754 "assigned_rate_limits": { 00:05:19.754 "rw_ios_per_sec": 0, 00:05:19.754 "rw_mbytes_per_sec": 0, 00:05:19.754 "r_mbytes_per_sec": 0, 00:05:19.754 "w_mbytes_per_sec": 0 00:05:19.754 }, 00:05:19.754 "claimed": true, 00:05:19.754 "claim_type": "exclusive_write", 00:05:19.754 "zoned": false, 00:05:19.754 "supported_io_types": { 00:05:19.754 "read": true, 00:05:19.754 "write": true, 00:05:19.754 "unmap": true, 00:05:19.754 "flush": true, 00:05:19.754 "reset": 
true, 00:05:19.754 "nvme_admin": false, 00:05:19.754 "nvme_io": false, 00:05:19.754 "nvme_io_md": false, 00:05:19.754 "write_zeroes": true, 00:05:19.754 "zcopy": true, 00:05:19.754 "get_zone_info": false, 00:05:19.754 "zone_management": false, 00:05:19.754 "zone_append": false, 00:05:19.754 "compare": false, 00:05:19.754 "compare_and_write": false, 00:05:19.754 "abort": true, 00:05:19.754 "seek_hole": false, 00:05:19.754 "seek_data": false, 00:05:19.754 "copy": true, 00:05:19.754 "nvme_iov_md": false 00:05:19.754 }, 00:05:19.754 "memory_domains": [ 00:05:19.754 { 00:05:19.754 "dma_device_id": "system", 00:05:19.754 "dma_device_type": 1 00:05:19.754 }, 00:05:19.754 { 00:05:19.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.754 "dma_device_type": 2 00:05:19.754 } 00:05:19.754 ], 00:05:19.754 "driver_specific": {} 00:05:19.754 }, 00:05:19.754 { 00:05:19.754 "name": "Passthru0", 00:05:19.754 "aliases": [ 00:05:19.754 "02eac845-c791-58af-af4f-acafbc7f7dcf" 00:05:19.754 ], 00:05:19.754 "product_name": "passthru", 00:05:19.754 "block_size": 512, 00:05:19.754 "num_blocks": 16384, 00:05:19.754 "uuid": "02eac845-c791-58af-af4f-acafbc7f7dcf", 00:05:19.754 "assigned_rate_limits": { 00:05:19.754 "rw_ios_per_sec": 0, 00:05:19.754 "rw_mbytes_per_sec": 0, 00:05:19.754 "r_mbytes_per_sec": 0, 00:05:19.754 "w_mbytes_per_sec": 0 00:05:19.754 }, 00:05:19.754 "claimed": false, 00:05:19.754 "zoned": false, 00:05:19.754 "supported_io_types": { 00:05:19.754 "read": true, 00:05:19.754 "write": true, 00:05:19.754 "unmap": true, 00:05:19.754 "flush": true, 00:05:19.754 "reset": true, 00:05:19.754 "nvme_admin": false, 00:05:19.754 "nvme_io": false, 00:05:19.754 "nvme_io_md": false, 00:05:19.754 "write_zeroes": true, 00:05:19.754 "zcopy": true, 00:05:19.754 "get_zone_info": false, 00:05:19.754 "zone_management": false, 00:05:19.754 "zone_append": false, 00:05:19.754 "compare": false, 00:05:19.754 "compare_and_write": false, 00:05:19.754 "abort": true, 00:05:19.754 "seek_hole": false, 00:05:19.754 "seek_data": false, 00:05:19.754 "copy": true, 00:05:19.754 "nvme_iov_md": false 00:05:19.754 }, 00:05:19.754 "memory_domains": [ 00:05:19.754 { 00:05:19.754 "dma_device_id": "system", 00:05:19.754 "dma_device_type": 1 00:05:19.754 }, 00:05:19.754 { 00:05:19.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.754 "dma_device_type": 2 00:05:19.754 } 00:05:19.754 ], 00:05:19.754 "driver_specific": { 00:05:19.754 "passthru": { 00:05:19.754 "name": "Passthru0", 00:05:19.754 "base_bdev_name": "Malloc0" 00:05:19.754 } 00:05:19.754 } 00:05:19.754 } 00:05:19.754 ]' 00:05:19.754 15:09:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:19.754 15:09:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:19.754 15:09:47 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:19.754 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.754 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.013 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.013 15:09:47 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:20.013 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.013 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.013 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.013 15:09:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 
00:05:20.013 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.013 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.013 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.013 15:09:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:20.013 15:09:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:20.013 15:09:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:20.013 00:05:20.014 real 0m0.321s 00:05:20.014 user 0m0.169s 00:05:20.014 sys 0m0.062s 00:05:20.014 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:20.014 15:09:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.014 ************************************ 00:05:20.014 END TEST rpc_integrity 00:05:20.014 ************************************ 00:05:20.014 15:09:47 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:20.014 15:09:47 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:20.014 15:09:47 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:20.014 15:09:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.014 ************************************ 00:05:20.014 START TEST rpc_plugins 00:05:20.014 ************************************ 00:05:20.014 15:09:47 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:05:20.014 15:09:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:20.014 15:09:47 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.014 15:09:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.014 15:09:47 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.014 15:09:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:20.014 15:09:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:20.014 15:09:47 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.014 15:09:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.014 15:09:47 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.014 15:09:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:20.014 { 00:05:20.014 "name": "Malloc1", 00:05:20.014 "aliases": [ 00:05:20.014 "22ebbe7f-a88e-4ba7-9177-9321bcd83fd8" 00:05:20.014 ], 00:05:20.014 "product_name": "Malloc disk", 00:05:20.014 "block_size": 4096, 00:05:20.014 "num_blocks": 256, 00:05:20.014 "uuid": "22ebbe7f-a88e-4ba7-9177-9321bcd83fd8", 00:05:20.014 "assigned_rate_limits": { 00:05:20.014 "rw_ios_per_sec": 0, 00:05:20.014 "rw_mbytes_per_sec": 0, 00:05:20.014 "r_mbytes_per_sec": 0, 00:05:20.014 "w_mbytes_per_sec": 0 00:05:20.014 }, 00:05:20.014 "claimed": false, 00:05:20.014 "zoned": false, 00:05:20.014 "supported_io_types": { 00:05:20.014 "read": true, 00:05:20.014 "write": true, 00:05:20.014 "unmap": true, 00:05:20.014 "flush": true, 00:05:20.014 "reset": true, 00:05:20.014 "nvme_admin": false, 00:05:20.014 "nvme_io": false, 00:05:20.014 "nvme_io_md": false, 00:05:20.014 "write_zeroes": true, 00:05:20.014 "zcopy": true, 00:05:20.014 "get_zone_info": false, 00:05:20.014 "zone_management": false, 00:05:20.014 "zone_append": false, 00:05:20.014 "compare": false, 00:05:20.014 "compare_and_write": false, 00:05:20.014 "abort": true, 00:05:20.014 "seek_hole": false, 00:05:20.014 "seek_data": false, 00:05:20.014 "copy": true, 00:05:20.014 "nvme_iov_md": false 00:05:20.014 }, 
00:05:20.014 "memory_domains": [ 00:05:20.014 { 00:05:20.014 "dma_device_id": "system", 00:05:20.014 "dma_device_type": 1 00:05:20.014 }, 00:05:20.014 { 00:05:20.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.014 "dma_device_type": 2 00:05:20.014 } 00:05:20.014 ], 00:05:20.014 "driver_specific": {} 00:05:20.014 } 00:05:20.014 ]' 00:05:20.014 15:09:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:20.014 15:09:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:20.014 15:09:47 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:20.014 15:09:47 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.014 15:09:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.274 15:09:47 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.274 15:09:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:20.274 15:09:47 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.274 15:09:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.274 15:09:47 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.274 15:09:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:20.274 15:09:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:20.274 15:09:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:20.274 00:05:20.274 real 0m0.152s 00:05:20.274 user 0m0.087s 00:05:20.274 sys 0m0.025s 00:05:20.274 15:09:47 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:20.274 15:09:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.274 ************************************ 00:05:20.274 END TEST rpc_plugins 00:05:20.274 ************************************ 00:05:20.274 15:09:47 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:20.274 15:09:47 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:20.274 15:09:47 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:20.274 15:09:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.274 ************************************ 00:05:20.274 START TEST rpc_trace_cmd_test 00:05:20.274 ************************************ 00:05:20.274 15:09:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:05:20.274 15:09:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:20.274 15:09:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:20.274 15:09:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.274 15:09:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:20.274 15:09:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.274 15:09:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:20.274 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2937534", 00:05:20.274 "tpoint_group_mask": "0x8", 00:05:20.274 "iscsi_conn": { 00:05:20.274 "mask": "0x2", 00:05:20.274 "tpoint_mask": "0x0" 00:05:20.274 }, 00:05:20.274 "scsi": { 00:05:20.274 "mask": "0x4", 00:05:20.274 "tpoint_mask": "0x0" 00:05:20.274 }, 00:05:20.274 "bdev": { 00:05:20.274 "mask": "0x8", 00:05:20.274 "tpoint_mask": "0xffffffffffffffff" 00:05:20.274 }, 00:05:20.274 "nvmf_rdma": { 00:05:20.274 "mask": "0x10", 00:05:20.274 "tpoint_mask": "0x0" 00:05:20.274 }, 00:05:20.274 "nvmf_tcp": { 00:05:20.274 "mask": "0x20", 00:05:20.274 "tpoint_mask": "0x0" 
00:05:20.274 }, 00:05:20.274 "ftl": { 00:05:20.274 "mask": "0x40", 00:05:20.274 "tpoint_mask": "0x0" 00:05:20.274 }, 00:05:20.274 "blobfs": { 00:05:20.274 "mask": "0x80", 00:05:20.274 "tpoint_mask": "0x0" 00:05:20.274 }, 00:05:20.274 "dsa": { 00:05:20.274 "mask": "0x200", 00:05:20.274 "tpoint_mask": "0x0" 00:05:20.274 }, 00:05:20.274 "thread": { 00:05:20.274 "mask": "0x400", 00:05:20.274 "tpoint_mask": "0x0" 00:05:20.274 }, 00:05:20.274 "nvme_pcie": { 00:05:20.274 "mask": "0x800", 00:05:20.274 "tpoint_mask": "0x0" 00:05:20.274 }, 00:05:20.274 "iaa": { 00:05:20.274 "mask": "0x1000", 00:05:20.274 "tpoint_mask": "0x0" 00:05:20.274 }, 00:05:20.274 "nvme_tcp": { 00:05:20.274 "mask": "0x2000", 00:05:20.274 "tpoint_mask": "0x0" 00:05:20.274 }, 00:05:20.274 "bdev_nvme": { 00:05:20.274 "mask": "0x4000", 00:05:20.274 "tpoint_mask": "0x0" 00:05:20.274 }, 00:05:20.274 "sock": { 00:05:20.274 "mask": "0x8000", 00:05:20.274 "tpoint_mask": "0x0" 00:05:20.274 }, 00:05:20.274 "blob": { 00:05:20.274 "mask": "0x10000", 00:05:20.274 "tpoint_mask": "0x0" 00:05:20.274 }, 00:05:20.274 "bdev_raid": { 00:05:20.274 "mask": "0x20000", 00:05:20.274 "tpoint_mask": "0x0" 00:05:20.274 }, 00:05:20.274 "scheduler": { 00:05:20.274 "mask": "0x40000", 00:05:20.274 "tpoint_mask": "0x0" 00:05:20.274 } 00:05:20.274 }' 00:05:20.274 15:09:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:20.274 15:09:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:20.274 15:09:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:20.274 15:09:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:20.274 15:09:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:20.533 15:09:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:20.533 15:09:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:20.533 15:09:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:20.534 15:09:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:20.534 15:09:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:20.534 00:05:20.534 real 0m0.224s 00:05:20.534 user 0m0.183s 00:05:20.534 sys 0m0.031s 00:05:20.534 15:09:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:20.534 15:09:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:20.534 ************************************ 00:05:20.534 END TEST rpc_trace_cmd_test 00:05:20.534 ************************************ 00:05:20.534 15:09:48 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:20.534 15:09:48 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:20.534 15:09:48 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:20.534 15:09:48 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:20.534 15:09:48 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:20.534 15:09:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.534 ************************************ 00:05:20.534 START TEST rpc_daemon_integrity 00:05:20.534 ************************************ 00:05:20.534 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:20.534 15:09:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:20.534 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.534 15:09:48 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:20.534 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.534 15:09:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:20.534 15:09:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:20.534 15:09:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:20.534 15:09:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:20.534 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.534 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.793 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.793 15:09:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:20.793 15:09:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:20.793 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.793 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.793 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.793 15:09:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:20.793 { 00:05:20.793 "name": "Malloc2", 00:05:20.793 "aliases": [ 00:05:20.793 "0485f2c9-badc-4285-91f9-c52c0be0a91b" 00:05:20.793 ], 00:05:20.793 "product_name": "Malloc disk", 00:05:20.793 "block_size": 512, 00:05:20.793 "num_blocks": 16384, 00:05:20.793 "uuid": "0485f2c9-badc-4285-91f9-c52c0be0a91b", 00:05:20.793 "assigned_rate_limits": { 00:05:20.793 "rw_ios_per_sec": 0, 00:05:20.793 "rw_mbytes_per_sec": 0, 00:05:20.793 "r_mbytes_per_sec": 0, 00:05:20.793 "w_mbytes_per_sec": 0 00:05:20.793 }, 00:05:20.793 "claimed": false, 00:05:20.793 "zoned": false, 00:05:20.793 "supported_io_types": { 00:05:20.793 "read": true, 00:05:20.793 "write": true, 00:05:20.793 "unmap": true, 00:05:20.793 "flush": true, 00:05:20.793 "reset": true, 00:05:20.793 "nvme_admin": false, 00:05:20.793 "nvme_io": false, 00:05:20.793 "nvme_io_md": false, 00:05:20.793 "write_zeroes": true, 00:05:20.793 "zcopy": true, 00:05:20.793 "get_zone_info": false, 00:05:20.793 "zone_management": false, 00:05:20.793 "zone_append": false, 00:05:20.793 "compare": false, 00:05:20.793 "compare_and_write": false, 00:05:20.793 "abort": true, 00:05:20.793 "seek_hole": false, 00:05:20.793 "seek_data": false, 00:05:20.793 "copy": true, 00:05:20.793 "nvme_iov_md": false 00:05:20.793 }, 00:05:20.793 "memory_domains": [ 00:05:20.793 { 00:05:20.793 "dma_device_id": "system", 00:05:20.793 "dma_device_type": 1 00:05:20.793 }, 00:05:20.793 { 00:05:20.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.793 "dma_device_type": 2 00:05:20.793 } 00:05:20.793 ], 00:05:20.793 "driver_specific": {} 00:05:20.793 } 00:05:20.793 ]' 00:05:20.793 15:09:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:20.793 15:09:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:20.793 15:09:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:20.793 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.793 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.793 [2024-11-06 15:09:48.251385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:20.793 [2024-11-06 15:09:48.251425] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:20.793 [2024-11-06 15:09:48.251454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600001da80 00:05:20.793 [2024-11-06 15:09:48.251467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:20.793 [2024-11-06 15:09:48.253684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:20.793 [2024-11-06 15:09:48.253711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:20.793 Passthru0 00:05:20.793 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.794 15:09:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:20.794 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.794 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.794 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.794 15:09:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:20.794 { 00:05:20.794 "name": "Malloc2", 00:05:20.794 "aliases": [ 00:05:20.794 "0485f2c9-badc-4285-91f9-c52c0be0a91b" 00:05:20.794 ], 00:05:20.794 "product_name": "Malloc disk", 00:05:20.794 "block_size": 512, 00:05:20.794 "num_blocks": 16384, 00:05:20.794 "uuid": "0485f2c9-badc-4285-91f9-c52c0be0a91b", 00:05:20.794 "assigned_rate_limits": { 00:05:20.794 "rw_ios_per_sec": 0, 00:05:20.794 "rw_mbytes_per_sec": 0, 00:05:20.794 "r_mbytes_per_sec": 0, 00:05:20.794 "w_mbytes_per_sec": 0 00:05:20.794 }, 00:05:20.794 "claimed": true, 00:05:20.794 "claim_type": "exclusive_write", 00:05:20.794 "zoned": false, 00:05:20.794 "supported_io_types": { 00:05:20.794 "read": true, 00:05:20.794 "write": true, 00:05:20.794 "unmap": true, 00:05:20.794 "flush": true, 00:05:20.794 "reset": true, 00:05:20.794 "nvme_admin": false, 00:05:20.794 "nvme_io": false, 00:05:20.794 "nvme_io_md": false, 00:05:20.794 "write_zeroes": true, 00:05:20.794 "zcopy": true, 00:05:20.794 "get_zone_info": false, 00:05:20.794 "zone_management": false, 00:05:20.794 "zone_append": false, 00:05:20.794 "compare": false, 00:05:20.794 "compare_and_write": false, 00:05:20.794 "abort": true, 00:05:20.794 "seek_hole": false, 00:05:20.794 "seek_data": false, 00:05:20.794 "copy": true, 00:05:20.794 "nvme_iov_md": false 00:05:20.794 }, 00:05:20.794 "memory_domains": [ 00:05:20.794 { 00:05:20.794 "dma_device_id": "system", 00:05:20.794 "dma_device_type": 1 00:05:20.794 }, 00:05:20.794 { 00:05:20.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.794 "dma_device_type": 2 00:05:20.794 } 00:05:20.794 ], 00:05:20.794 "driver_specific": {} 00:05:20.794 }, 00:05:20.794 { 00:05:20.794 "name": "Passthru0", 00:05:20.794 "aliases": [ 00:05:20.794 "e97a0b73-698e-5130-b524-cb063ca740d2" 00:05:20.794 ], 00:05:20.794 "product_name": "passthru", 00:05:20.794 "block_size": 512, 00:05:20.794 "num_blocks": 16384, 00:05:20.794 "uuid": "e97a0b73-698e-5130-b524-cb063ca740d2", 00:05:20.794 "assigned_rate_limits": { 00:05:20.794 "rw_ios_per_sec": 0, 00:05:20.794 "rw_mbytes_per_sec": 0, 00:05:20.794 "r_mbytes_per_sec": 0, 00:05:20.794 "w_mbytes_per_sec": 0 00:05:20.794 }, 00:05:20.794 "claimed": false, 00:05:20.794 "zoned": false, 00:05:20.794 "supported_io_types": { 00:05:20.794 "read": true, 00:05:20.794 "write": true, 00:05:20.794 "unmap": true, 00:05:20.794 "flush": true, 00:05:20.794 "reset": true, 00:05:20.794 "nvme_admin": 
false, 00:05:20.794 "nvme_io": false, 00:05:20.794 "nvme_io_md": false, 00:05:20.794 "write_zeroes": true, 00:05:20.794 "zcopy": true, 00:05:20.794 "get_zone_info": false, 00:05:20.794 "zone_management": false, 00:05:20.794 "zone_append": false, 00:05:20.794 "compare": false, 00:05:20.794 "compare_and_write": false, 00:05:20.794 "abort": true, 00:05:20.794 "seek_hole": false, 00:05:20.794 "seek_data": false, 00:05:20.794 "copy": true, 00:05:20.794 "nvme_iov_md": false 00:05:20.794 }, 00:05:20.794 "memory_domains": [ 00:05:20.794 { 00:05:20.794 "dma_device_id": "system", 00:05:20.794 "dma_device_type": 1 00:05:20.794 }, 00:05:20.794 { 00:05:20.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.794 "dma_device_type": 2 00:05:20.794 } 00:05:20.794 ], 00:05:20.794 "driver_specific": { 00:05:20.794 "passthru": { 00:05:20.794 "name": "Passthru0", 00:05:20.794 "base_bdev_name": "Malloc2" 00:05:20.794 } 00:05:20.794 } 00:05:20.794 } 00:05:20.794 ]' 00:05:20.794 15:09:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:20.794 15:09:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:20.794 15:09:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:20.794 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.794 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.794 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.794 15:09:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:20.794 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.794 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.794 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.794 15:09:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:20.794 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.794 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.794 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.794 15:09:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:20.794 15:09:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:21.053 15:09:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:21.053 00:05:21.053 real 0m0.328s 00:05:21.053 user 0m0.183s 00:05:21.053 sys 0m0.055s 00:05:21.053 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:21.053 15:09:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.053 ************************************ 00:05:21.053 END TEST rpc_daemon_integrity 00:05:21.053 ************************************ 00:05:21.053 15:09:48 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:21.053 15:09:48 rpc -- rpc/rpc.sh@84 -- # killprocess 2937534 00:05:21.053 15:09:48 rpc -- common/autotest_common.sh@952 -- # '[' -z 2937534 ']' 00:05:21.053 15:09:48 rpc -- common/autotest_common.sh@956 -- # kill -0 2937534 00:05:21.053 15:09:48 rpc -- common/autotest_common.sh@957 -- # uname 00:05:21.053 15:09:48 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:21.053 15:09:48 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2937534 00:05:21.053 15:09:48 rpc -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:21.053 15:09:48 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:21.053 15:09:48 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2937534' 00:05:21.053 killing process with pid 2937534 00:05:21.053 15:09:48 rpc -- common/autotest_common.sh@971 -- # kill 2937534 00:05:21.053 15:09:48 rpc -- common/autotest_common.sh@976 -- # wait 2937534 00:05:23.587 00:05:23.587 real 0m5.011s 00:05:23.587 user 0m5.532s 00:05:23.587 sys 0m1.086s 00:05:23.587 15:09:50 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:23.587 15:09:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.587 ************************************ 00:05:23.587 END TEST rpc 00:05:23.587 ************************************ 00:05:23.587 15:09:50 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:23.587 15:09:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:23.587 15:09:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:23.587 15:09:50 -- common/autotest_common.sh@10 -- # set +x 00:05:23.587 ************************************ 00:05:23.587 START TEST skip_rpc 00:05:23.587 ************************************ 00:05:23.587 15:09:50 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:23.587 * Looking for test storage... 00:05:23.587 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:23.587 15:09:51 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:23.587 15:09:51 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:23.587 15:09:51 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:23.587 15:09:51 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.587 15:09:51 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:23.587 15:09:51 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.587 15:09:51 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:23.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.587 --rc genhtml_branch_coverage=1 00:05:23.587 --rc genhtml_function_coverage=1 00:05:23.587 --rc genhtml_legend=1 00:05:23.587 --rc geninfo_all_blocks=1 00:05:23.587 --rc geninfo_unexecuted_blocks=1 00:05:23.587 00:05:23.587 ' 00:05:23.587 15:09:51 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:23.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.587 --rc genhtml_branch_coverage=1 00:05:23.587 --rc genhtml_function_coverage=1 00:05:23.587 --rc genhtml_legend=1 00:05:23.587 --rc geninfo_all_blocks=1 00:05:23.587 --rc geninfo_unexecuted_blocks=1 00:05:23.587 00:05:23.587 ' 00:05:23.587 15:09:51 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:23.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.587 --rc genhtml_branch_coverage=1 00:05:23.587 --rc genhtml_function_coverage=1 00:05:23.587 --rc genhtml_legend=1 00:05:23.587 --rc geninfo_all_blocks=1 00:05:23.587 --rc geninfo_unexecuted_blocks=1 00:05:23.587 00:05:23.587 ' 00:05:23.587 15:09:51 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:23.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.587 --rc genhtml_branch_coverage=1 00:05:23.587 --rc genhtml_function_coverage=1 00:05:23.587 --rc genhtml_legend=1 00:05:23.587 --rc geninfo_all_blocks=1 00:05:23.587 --rc geninfo_unexecuted_blocks=1 00:05:23.587 00:05:23.587 ' 00:05:23.587 15:09:51 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:23.587 15:09:51 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:23.587 15:09:51 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:23.587 15:09:51 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:23.587 15:09:51 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:23.587 15:09:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.587 ************************************ 00:05:23.587 START TEST skip_rpc 00:05:23.587 ************************************ 00:05:23.587 15:09:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:05:23.587 15:09:51 
skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2938326 00:05:23.587 15:09:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:23.587 15:09:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.587 15:09:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:23.847 [2024-11-06 15:09:51.229698] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:05:23.847 [2024-11-06 15:09:51.229795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2938326 ] 00:05:23.847 [2024-11-06 15:09:51.377046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.106 [2024-11-06 15:09:51.484967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2938326 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 2938326 ']' 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 2938326 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2938326 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2938326' 00:05:29.498 killing process with pid 2938326 00:05:29.498 15:09:56 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 2938326 00:05:29.498 15:09:56 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 2938326 00:05:30.877 00:05:30.877 real 0m7.331s 00:05:30.877 user 0m6.891s 00:05:30.877 sys 0m0.465s 00:05:30.877 15:09:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:30.877 15:09:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.877 ************************************ 00:05:30.877 END TEST skip_rpc 00:05:30.877 ************************************ 00:05:30.877 15:09:58 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:30.877 15:09:58 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:30.877 15:09:58 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:30.877 15:09:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.136 ************************************ 00:05:31.136 START TEST skip_rpc_with_json 00:05:31.136 ************************************ 00:05:31.136 15:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:05:31.136 15:09:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:31.136 15:09:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2939445 00:05:31.136 15:09:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.136 15:09:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.136 15:09:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2939445 00:05:31.136 15:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 2939445 ']' 00:05:31.137 15:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.137 15:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:31.137 15:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.137 15:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:31.137 15:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:31.137 [2024-11-06 15:09:58.651621] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:05:31.137 [2024-11-06 15:09:58.651729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2939445 ] 00:05:31.396 [2024-11-06 15:09:58.797947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.396 [2024-11-06 15:09:58.904650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.333 15:09:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:32.333 15:09:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:05:32.333 15:09:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:32.333 15:09:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.333 15:09:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.333 [2024-11-06 15:09:59.672314] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:32.333 request: 00:05:32.333 { 00:05:32.333 "trtype": "tcp", 00:05:32.333 "method": "nvmf_get_transports", 00:05:32.333 "req_id": 1 00:05:32.333 } 00:05:32.333 Got JSON-RPC error response 00:05:32.333 response: 00:05:32.333 { 00:05:32.333 "code": -19, 00:05:32.333 "message": "No such device" 00:05:32.333 } 00:05:32.333 15:09:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:32.333 15:09:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:32.333 15:09:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.333 15:09:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.333 [2024-11-06 15:09:59.684408] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:32.333 15:09:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.333 15:09:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:32.333 15:09:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.333 15:09:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.333 15:09:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.333 15:09:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:32.333 { 00:05:32.333 "subsystems": [ 00:05:32.333 { 00:05:32.333 "subsystem": "fsdev", 00:05:32.333 "config": [ 00:05:32.333 { 00:05:32.333 "method": "fsdev_set_opts", 00:05:32.333 "params": { 00:05:32.333 "fsdev_io_pool_size": 65535, 00:05:32.333 "fsdev_io_cache_size": 256 00:05:32.333 } 00:05:32.333 } 00:05:32.333 ] 00:05:32.333 }, 00:05:32.333 { 00:05:32.333 "subsystem": "keyring", 00:05:32.333 "config": [] 00:05:32.333 }, 00:05:32.333 { 00:05:32.333 "subsystem": "iobuf", 00:05:32.333 "config": [ 00:05:32.333 { 00:05:32.333 "method": "iobuf_set_options", 00:05:32.333 "params": { 00:05:32.333 "small_pool_count": 8192, 00:05:32.333 "large_pool_count": 1024, 00:05:32.333 "small_bufsize": 8192, 00:05:32.333 "large_bufsize": 135168, 00:05:32.333 "enable_numa": false 00:05:32.333 } 00:05:32.333 } 00:05:32.333 ] 00:05:32.333 }, 00:05:32.333 { 00:05:32.333 "subsystem": "sock", 00:05:32.333 "config": [ 00:05:32.333 { 
00:05:32.333 "method": "sock_set_default_impl", 00:05:32.333 "params": { 00:05:32.333 "impl_name": "posix" 00:05:32.333 } 00:05:32.333 }, 00:05:32.333 { 00:05:32.333 "method": "sock_impl_set_options", 00:05:32.333 "params": { 00:05:32.333 "impl_name": "ssl", 00:05:32.333 "recv_buf_size": 4096, 00:05:32.333 "send_buf_size": 4096, 00:05:32.333 "enable_recv_pipe": true, 00:05:32.333 "enable_quickack": false, 00:05:32.333 "enable_placement_id": 0, 00:05:32.333 "enable_zerocopy_send_server": true, 00:05:32.333 "enable_zerocopy_send_client": false, 00:05:32.333 "zerocopy_threshold": 0, 00:05:32.333 "tls_version": 0, 00:05:32.333 "enable_ktls": false 00:05:32.333 } 00:05:32.333 }, 00:05:32.333 { 00:05:32.333 "method": "sock_impl_set_options", 00:05:32.333 "params": { 00:05:32.333 "impl_name": "posix", 00:05:32.333 "recv_buf_size": 2097152, 00:05:32.333 "send_buf_size": 2097152, 00:05:32.333 "enable_recv_pipe": true, 00:05:32.333 "enable_quickack": false, 00:05:32.333 "enable_placement_id": 0, 00:05:32.333 "enable_zerocopy_send_server": true, 00:05:32.333 "enable_zerocopy_send_client": false, 00:05:32.333 "zerocopy_threshold": 0, 00:05:32.333 "tls_version": 0, 00:05:32.333 "enable_ktls": false 00:05:32.333 } 00:05:32.333 } 00:05:32.333 ] 00:05:32.333 }, 00:05:32.333 { 00:05:32.333 "subsystem": "vmd", 00:05:32.333 "config": [] 00:05:32.333 }, 00:05:32.333 { 00:05:32.333 "subsystem": "accel", 00:05:32.333 "config": [ 00:05:32.333 { 00:05:32.333 "method": "accel_set_options", 00:05:32.333 "params": { 00:05:32.333 "small_cache_size": 128, 00:05:32.333 "large_cache_size": 16, 00:05:32.333 "task_count": 2048, 00:05:32.333 "sequence_count": 2048, 00:05:32.333 "buf_count": 2048 00:05:32.333 } 00:05:32.333 } 00:05:32.333 ] 00:05:32.333 }, 00:05:32.333 { 00:05:32.333 "subsystem": "bdev", 00:05:32.333 "config": [ 00:05:32.333 { 00:05:32.333 "method": "bdev_set_options", 00:05:32.333 "params": { 00:05:32.333 "bdev_io_pool_size": 65535, 00:05:32.333 "bdev_io_cache_size": 256, 00:05:32.333 "bdev_auto_examine": true, 00:05:32.333 "iobuf_small_cache_size": 128, 00:05:32.333 "iobuf_large_cache_size": 16 00:05:32.333 } 00:05:32.333 }, 00:05:32.333 { 00:05:32.333 "method": "bdev_raid_set_options", 00:05:32.333 "params": { 00:05:32.333 "process_window_size_kb": 1024, 00:05:32.333 "process_max_bandwidth_mb_sec": 0 00:05:32.333 } 00:05:32.333 }, 00:05:32.333 { 00:05:32.333 "method": "bdev_iscsi_set_options", 00:05:32.333 "params": { 00:05:32.333 "timeout_sec": 30 00:05:32.333 } 00:05:32.333 }, 00:05:32.333 { 00:05:32.333 "method": "bdev_nvme_set_options", 00:05:32.333 "params": { 00:05:32.333 "action_on_timeout": "none", 00:05:32.333 "timeout_us": 0, 00:05:32.333 "timeout_admin_us": 0, 00:05:32.333 "keep_alive_timeout_ms": 10000, 00:05:32.333 "arbitration_burst": 0, 00:05:32.333 "low_priority_weight": 0, 00:05:32.333 "medium_priority_weight": 0, 00:05:32.333 "high_priority_weight": 0, 00:05:32.333 "nvme_adminq_poll_period_us": 10000, 00:05:32.333 "nvme_ioq_poll_period_us": 0, 00:05:32.333 "io_queue_requests": 0, 00:05:32.333 "delay_cmd_submit": true, 00:05:32.333 "transport_retry_count": 4, 00:05:32.333 "bdev_retry_count": 3, 00:05:32.333 "transport_ack_timeout": 0, 00:05:32.333 "ctrlr_loss_timeout_sec": 0, 00:05:32.333 "reconnect_delay_sec": 0, 00:05:32.333 "fast_io_fail_timeout_sec": 0, 00:05:32.333 "disable_auto_failback": false, 00:05:32.333 "generate_uuids": false, 00:05:32.333 "transport_tos": 0, 00:05:32.333 "nvme_error_stat": false, 00:05:32.333 "rdma_srq_size": 0, 00:05:32.333 "io_path_stat": false, 
00:05:32.333 "allow_accel_sequence": false, 00:05:32.333 "rdma_max_cq_size": 0, 00:05:32.333 "rdma_cm_event_timeout_ms": 0, 00:05:32.333 "dhchap_digests": [ 00:05:32.333 "sha256", 00:05:32.333 "sha384", 00:05:32.333 "sha512" 00:05:32.333 ], 00:05:32.333 "dhchap_dhgroups": [ 00:05:32.333 "null", 00:05:32.333 "ffdhe2048", 00:05:32.334 "ffdhe3072", 00:05:32.334 "ffdhe4096", 00:05:32.334 "ffdhe6144", 00:05:32.334 "ffdhe8192" 00:05:32.334 ] 00:05:32.334 } 00:05:32.334 }, 00:05:32.334 { 00:05:32.334 "method": "bdev_nvme_set_hotplug", 00:05:32.334 "params": { 00:05:32.334 "period_us": 100000, 00:05:32.334 "enable": false 00:05:32.334 } 00:05:32.334 }, 00:05:32.334 { 00:05:32.334 "method": "bdev_wait_for_examine" 00:05:32.334 } 00:05:32.334 ] 00:05:32.334 }, 00:05:32.334 { 00:05:32.334 "subsystem": "scsi", 00:05:32.334 "config": null 00:05:32.334 }, 00:05:32.334 { 00:05:32.334 "subsystem": "scheduler", 00:05:32.334 "config": [ 00:05:32.334 { 00:05:32.334 "method": "framework_set_scheduler", 00:05:32.334 "params": { 00:05:32.334 "name": "static" 00:05:32.334 } 00:05:32.334 } 00:05:32.334 ] 00:05:32.334 }, 00:05:32.334 { 00:05:32.334 "subsystem": "vhost_scsi", 00:05:32.334 "config": [] 00:05:32.334 }, 00:05:32.334 { 00:05:32.334 "subsystem": "vhost_blk", 00:05:32.334 "config": [] 00:05:32.334 }, 00:05:32.334 { 00:05:32.334 "subsystem": "ublk", 00:05:32.334 "config": [] 00:05:32.334 }, 00:05:32.334 { 00:05:32.334 "subsystem": "nbd", 00:05:32.334 "config": [] 00:05:32.334 }, 00:05:32.334 { 00:05:32.334 "subsystem": "nvmf", 00:05:32.334 "config": [ 00:05:32.334 { 00:05:32.334 "method": "nvmf_set_config", 00:05:32.334 "params": { 00:05:32.334 "discovery_filter": "match_any", 00:05:32.334 "admin_cmd_passthru": { 00:05:32.334 "identify_ctrlr": false 00:05:32.334 }, 00:05:32.334 "dhchap_digests": [ 00:05:32.334 "sha256", 00:05:32.334 "sha384", 00:05:32.334 "sha512" 00:05:32.334 ], 00:05:32.334 "dhchap_dhgroups": [ 00:05:32.334 "null", 00:05:32.334 "ffdhe2048", 00:05:32.334 "ffdhe3072", 00:05:32.334 "ffdhe4096", 00:05:32.334 "ffdhe6144", 00:05:32.334 "ffdhe8192" 00:05:32.334 ] 00:05:32.334 } 00:05:32.334 }, 00:05:32.334 { 00:05:32.334 "method": "nvmf_set_max_subsystems", 00:05:32.334 "params": { 00:05:32.334 "max_subsystems": 1024 00:05:32.334 } 00:05:32.334 }, 00:05:32.334 { 00:05:32.334 "method": "nvmf_set_crdt", 00:05:32.334 "params": { 00:05:32.334 "crdt1": 0, 00:05:32.334 "crdt2": 0, 00:05:32.334 "crdt3": 0 00:05:32.334 } 00:05:32.334 }, 00:05:32.334 { 00:05:32.334 "method": "nvmf_create_transport", 00:05:32.334 "params": { 00:05:32.334 "trtype": "TCP", 00:05:32.334 "max_queue_depth": 128, 00:05:32.334 "max_io_qpairs_per_ctrlr": 127, 00:05:32.334 "in_capsule_data_size": 4096, 00:05:32.334 "max_io_size": 131072, 00:05:32.334 "io_unit_size": 131072, 00:05:32.334 "max_aq_depth": 128, 00:05:32.334 "num_shared_buffers": 511, 00:05:32.334 "buf_cache_size": 4294967295, 00:05:32.334 "dif_insert_or_strip": false, 00:05:32.334 "zcopy": false, 00:05:32.334 "c2h_success": true, 00:05:32.334 "sock_priority": 0, 00:05:32.334 "abort_timeout_sec": 1, 00:05:32.334 "ack_timeout": 0, 00:05:32.334 "data_wr_pool_size": 0 00:05:32.334 } 00:05:32.334 } 00:05:32.334 ] 00:05:32.334 }, 00:05:32.334 { 00:05:32.334 "subsystem": "iscsi", 00:05:32.334 "config": [ 00:05:32.334 { 00:05:32.334 "method": "iscsi_set_options", 00:05:32.334 "params": { 00:05:32.334 "node_base": "iqn.2016-06.io.spdk", 00:05:32.334 "max_sessions": 128, 00:05:32.334 "max_connections_per_session": 2, 00:05:32.334 "max_queue_depth": 64, 00:05:32.334 
"default_time2wait": 2, 00:05:32.334 "default_time2retain": 20, 00:05:32.334 "first_burst_length": 8192, 00:05:32.334 "immediate_data": true, 00:05:32.334 "allow_duplicated_isid": false, 00:05:32.334 "error_recovery_level": 0, 00:05:32.334 "nop_timeout": 60, 00:05:32.334 "nop_in_interval": 30, 00:05:32.334 "disable_chap": false, 00:05:32.334 "require_chap": false, 00:05:32.334 "mutual_chap": false, 00:05:32.334 "chap_group": 0, 00:05:32.334 "max_large_datain_per_connection": 64, 00:05:32.334 "max_r2t_per_connection": 4, 00:05:32.334 "pdu_pool_size": 36864, 00:05:32.334 "immediate_data_pool_size": 16384, 00:05:32.334 "data_out_pool_size": 2048 00:05:32.334 } 00:05:32.334 } 00:05:32.334 ] 00:05:32.334 } 00:05:32.334 ] 00:05:32.334 } 00:05:32.334 15:09:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:32.334 15:09:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2939445 00:05:32.334 15:09:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 2939445 ']' 00:05:32.334 15:09:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 2939445 00:05:32.334 15:09:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:32.334 15:09:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:32.334 15:09:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2939445 00:05:32.334 15:09:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:32.334 15:09:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:32.334 15:09:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2939445' 00:05:32.334 killing process with pid 2939445 00:05:32.334 15:09:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 2939445 00:05:32.334 15:09:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 2939445 00:05:34.867 15:10:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2939831 00:05:34.867 15:10:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:34.867 15:10:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:40.138 15:10:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2939831 00:05:40.138 15:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 2939831 ']' 00:05:40.138 15:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 2939831 00:05:40.138 15:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:40.138 15:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:40.138 15:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2939831 00:05:40.138 15:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:40.138 15:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:40.138 15:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2939831' 00:05:40.138 killing process with pid 2939831 00:05:40.138 15:10:07 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 2939831 00:05:40.138 15:10:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 2939831 00:05:42.043 15:10:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:42.043 15:10:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:42.043 00:05:42.043 real 0m11.006s 00:05:42.043 user 0m10.412s 00:05:42.043 sys 0m1.094s 00:05:42.043 15:10:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:42.043 15:10:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.043 ************************************ 00:05:42.043 END TEST skip_rpc_with_json 00:05:42.043 ************************************ 00:05:42.043 15:10:09 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:42.043 15:10:09 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:42.043 15:10:09 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:42.043 15:10:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.043 ************************************ 00:05:42.043 START TEST skip_rpc_with_delay 00:05:42.043 ************************************ 00:05:42.043 15:10:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:05:42.043 15:10:09 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:42.043 15:10:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:42.043 15:10:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:42.043 15:10:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:42.043 15:10:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:42.043 15:10:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:42.043 15:10:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:42.043 15:10:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:42.043 15:10:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:42.043 15:10:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:42.043 15:10:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:42.043 15:10:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:42.302 [2024-11-06 15:10:09.730070] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
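The skip_rpc_with_json run above walks the full configure/save/restore cycle: nvmf_get_transports first comes back as a JSON-RPC error object (code -19, "No such device") because no TCP transport exists yet, nvmf_create_transport then brings one up ("TCP Transport Init"), and save_config serializes every subsystem into config.json so a second target started with --no-rpc-server --json can replay it; the *ERROR* line immediately above belongs to the follow-on skip_rpc_with_delay case, which verifies that --wait-for-rpc is rejected whenever no RPC server is going to run. A minimal sketch of the same cycle outside the test harness, assuming a standard SPDK build tree (paths, core mask and file names are illustrative):

# start a target on the default RPC socket (/var/tmp/spdk.sock)
./build/bin/spdk_tgt -m 0x1 &
tgt_pid=$!
sleep 2    # crude wait; the harness polls with waitforlisten instead

./scripts/rpc.py nvmf_get_transports --trtype tcp || true   # expected to fail: transport not created yet
./scripts/rpc.py nvmf_create_transport -t TCP                # target log reports "TCP Transport Init"
./scripts/rpc.py save_config > /tmp/config.json              # snapshot of all subsystem settings

kill $tgt_pid; wait $tgt_pid

# replay the snapshot with no RPC server at all; adding --wait-for-rpc here
# would be refused, exactly as the *ERROR* above shows
./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json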
00:05:42.302 15:10:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:42.302 15:10:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:42.302 15:10:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:42.302 15:10:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:42.302 00:05:42.302 real 0m0.155s 00:05:42.302 user 0m0.078s 00:05:42.302 sys 0m0.077s 00:05:42.302 15:10:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:42.302 15:10:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:42.302 ************************************ 00:05:42.302 END TEST skip_rpc_with_delay 00:05:42.302 ************************************ 00:05:42.302 15:10:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:42.302 15:10:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:42.302 15:10:09 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:42.302 15:10:09 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:42.302 15:10:09 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:42.302 15:10:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.302 ************************************ 00:05:42.302 START TEST exit_on_failed_rpc_init 00:05:42.302 ************************************ 00:05:42.302 15:10:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:05:42.302 15:10:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2940971 00:05:42.302 15:10:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2940971 00:05:42.302 15:10:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.302 15:10:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 2940971 ']' 00:05:42.302 15:10:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.302 15:10:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:42.302 15:10:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.302 15:10:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:42.302 15:10:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:42.560 [2024-11-06 15:10:09.983097] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:05:42.560 [2024-11-06 15:10:09.983220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2940971 ] 00:05:42.560 [2024-11-06 15:10:10.134698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.819 [2024-11-06 15:10:10.242870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.756 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:43.756 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:05:43.756 15:10:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.756 15:10:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:43.756 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:43.756 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:43.756 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.756 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:43.756 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.756 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:43.756 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.756 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:43.756 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.756 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:43.756 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:43.756 [2024-11-06 15:10:11.140711] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:05:43.756 [2024-11-06 15:10:11.140818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2941159 ] 00:05:43.756 [2024-11-06 15:10:11.287952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.014 [2024-11-06 15:10:11.398914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.014 [2024-11-06 15:10:11.399008] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
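The exit_on_failed_rpc_init case deliberately starts a second spdk_tgt while the first still owns /var/tmp/spdk.sock, so rpc.c refuses to listen and the application stops with a non-zero status, which is precisely what the test asserts on. When two targets genuinely have to coexist, each one is normally given its own RPC socket and core mask; a short sketch under that assumption (socket paths are illustrative, hugepage sizing left at defaults):

# first instance: core 0, private RPC socket
./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &

# second instance: different core and a different listen path, so it does not
# trip over the "socket in use" failure provoked above
./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &

# each instance is then addressed by pointing rpc.py at its own socket
./scripts/rpc.py -s /var/tmp/spdk_a.sock rpc_get_methods
./scripts/rpc.py -s /var/tmp/spdk_b.sock rpc_get_methods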
00:05:44.014 [2024-11-06 15:10:11.399031] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:44.014 [2024-11-06 15:10:11.399042] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:44.014 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:44.014 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:44.014 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:44.014 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:44.014 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:44.014 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:44.014 15:10:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:44.014 15:10:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2940971 00:05:44.014 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 2940971 ']' 00:05:44.014 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 2940971 00:05:44.015 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:05:44.273 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:44.273 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2940971 00:05:44.273 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:44.273 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:44.273 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2940971' 00:05:44.273 killing process with pid 2940971 00:05:44.273 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 2940971 00:05:44.273 15:10:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 2940971 00:05:46.809 00:05:46.809 real 0m4.082s 00:05:46.809 user 0m4.384s 00:05:46.809 sys 0m0.733s 00:05:46.809 15:10:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:46.809 15:10:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:46.809 ************************************ 00:05:46.809 END TEST exit_on_failed_rpc_init 00:05:46.809 ************************************ 00:05:46.809 15:10:14 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:46.809 00:05:46.809 real 0m23.111s 00:05:46.809 user 0m22.003s 00:05:46.809 sys 0m2.711s 00:05:46.809 15:10:14 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:46.809 15:10:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.809 ************************************ 00:05:46.809 END TEST skip_rpc 00:05:46.809 ************************************ 00:05:46.809 15:10:14 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:46.809 15:10:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:46.809 15:10:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:46.809 15:10:14 -- 
common/autotest_common.sh@10 -- # set +x 00:05:46.809 ************************************ 00:05:46.809 START TEST rpc_client 00:05:46.809 ************************************ 00:05:46.809 15:10:14 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:46.809 * Looking for test storage... 00:05:46.809 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:46.809 15:10:14 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:46.809 15:10:14 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:46.809 15:10:14 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:46.809 15:10:14 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.809 15:10:14 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:46.809 15:10:14 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.809 15:10:14 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:46.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.809 --rc genhtml_branch_coverage=1 00:05:46.809 --rc genhtml_function_coverage=1 00:05:46.809 --rc genhtml_legend=1 00:05:46.809 --rc geninfo_all_blocks=1 00:05:46.809 --rc geninfo_unexecuted_blocks=1 00:05:46.809 00:05:46.809 ' 00:05:46.809 15:10:14 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:46.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.809 --rc genhtml_branch_coverage=1 00:05:46.809 --rc genhtml_function_coverage=1 00:05:46.809 --rc genhtml_legend=1 00:05:46.809 --rc geninfo_all_blocks=1 00:05:46.809 --rc geninfo_unexecuted_blocks=1 00:05:46.809 00:05:46.809 ' 00:05:46.809 15:10:14 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:46.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.809 --rc genhtml_branch_coverage=1 00:05:46.809 --rc genhtml_function_coverage=1 00:05:46.809 --rc genhtml_legend=1 00:05:46.809 --rc geninfo_all_blocks=1 00:05:46.809 --rc geninfo_unexecuted_blocks=1 00:05:46.809 00:05:46.809 ' 00:05:46.810 15:10:14 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:46.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.810 --rc genhtml_branch_coverage=1 00:05:46.810 --rc genhtml_function_coverage=1 00:05:46.810 --rc genhtml_legend=1 00:05:46.810 --rc geninfo_all_blocks=1 00:05:46.810 --rc geninfo_unexecuted_blocks=1 00:05:46.810 00:05:46.810 ' 00:05:46.810 15:10:14 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:46.810 OK 00:05:46.810 15:10:14 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:46.810 00:05:46.810 real 0m0.268s 00:05:46.810 user 0m0.136s 00:05:46.810 sys 0m0.150s 00:05:46.810 15:10:14 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:46.810 15:10:14 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:46.810 ************************************ 00:05:46.810 END TEST rpc_client 00:05:46.810 ************************************ 00:05:46.810 15:10:14 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:46.810 
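The rpc_client suite that just passed exercises the same JSON-RPC server from a compiled C client (rpc_client_test); the envelope it exchanges is the one visible earlier in this log, a request object with "method"/"params" answered by either a "result" or an "error" member. From the shell the equivalent round trip normally goes through rpc.py, which builds the envelope itself; a tiny sketch, assuming a target is already listening on the default socket:

# enumerate every method the server exposes
./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods

# failures come back as JSON-RPC error objects, e.g. the code -19 ("No such
# device") response shown earlier for a transport that does not exist yet
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_transports --trtype tcp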
15:10:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:46.810 15:10:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:46.810 15:10:14 -- common/autotest_common.sh@10 -- # set +x 00:05:47.070 ************************************ 00:05:47.070 START TEST json_config 00:05:47.070 ************************************ 00:05:47.070 15:10:14 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:47.070 15:10:14 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:47.070 15:10:14 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:47.070 15:10:14 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:47.070 15:10:14 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:47.070 15:10:14 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.070 15:10:14 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.070 15:10:14 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.070 15:10:14 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.070 15:10:14 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.070 15:10:14 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.070 15:10:14 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.070 15:10:14 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.070 15:10:14 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.070 15:10:14 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.070 15:10:14 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.070 15:10:14 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:47.070 15:10:14 json_config -- scripts/common.sh@345 -- # : 1 00:05:47.070 15:10:14 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.070 15:10:14 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.070 15:10:14 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:47.070 15:10:14 json_config -- scripts/common.sh@353 -- # local d=1 00:05:47.070 15:10:14 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.070 15:10:14 json_config -- scripts/common.sh@355 -- # echo 1 00:05:47.070 15:10:14 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.070 15:10:14 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:47.070 15:10:14 json_config -- scripts/common.sh@353 -- # local d=2 00:05:47.070 15:10:14 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.070 15:10:14 json_config -- scripts/common.sh@355 -- # echo 2 00:05:47.070 15:10:14 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.070 15:10:14 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.070 15:10:14 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.070 15:10:14 json_config -- scripts/common.sh@368 -- # return 0 00:05:47.070 15:10:14 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.070 15:10:14 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:47.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.070 --rc genhtml_branch_coverage=1 00:05:47.070 --rc genhtml_function_coverage=1 00:05:47.070 --rc genhtml_legend=1 00:05:47.070 --rc geninfo_all_blocks=1 00:05:47.070 --rc geninfo_unexecuted_blocks=1 00:05:47.070 00:05:47.070 ' 00:05:47.070 15:10:14 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:47.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.070 --rc genhtml_branch_coverage=1 00:05:47.070 --rc genhtml_function_coverage=1 00:05:47.070 --rc genhtml_legend=1 00:05:47.070 --rc geninfo_all_blocks=1 00:05:47.070 --rc geninfo_unexecuted_blocks=1 00:05:47.070 00:05:47.070 ' 00:05:47.070 15:10:14 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:47.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.070 --rc genhtml_branch_coverage=1 00:05:47.070 --rc genhtml_function_coverage=1 00:05:47.070 --rc genhtml_legend=1 00:05:47.070 --rc geninfo_all_blocks=1 00:05:47.070 --rc geninfo_unexecuted_blocks=1 00:05:47.070 00:05:47.070 ' 00:05:47.070 15:10:14 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:47.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.070 --rc genhtml_branch_coverage=1 00:05:47.070 --rc genhtml_function_coverage=1 00:05:47.070 --rc genhtml_legend=1 00:05:47.070 --rc geninfo_all_blocks=1 00:05:47.070 --rc geninfo_unexecuted_blocks=1 00:05:47.070 00:05:47.070 ' 00:05:47.070 15:10:14 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
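Before doing anything else, json_config.sh sources test/nvmf/common.sh, which pins down the addressing used for the rest of the run: listener ports 4420 to 4422, the 192.168.100.0/24 test subnet that the mlx5 ports are assigned further down, and a host NQN produced by nvme gen-hostnqn. Those values end up as plain RPC parameters once a subsystem is created; a sketch of how they are typically consumed, assuming an RDMA-capable target on the default socket (NQN, serial and addresses are illustrative):

# transport first, then a subsystem, then a listener on the test subnet/port
./scripts/rpc.py nvmf_create_transport -t RDMA
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420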
00:05:47.070 15:10:14 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:47.070 15:10:14 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:47.070 15:10:14 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.070 15:10:14 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.070 15:10:14 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.070 15:10:14 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.070 15:10:14 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.070 15:10:14 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.070 15:10:14 json_config -- paths/export.sh@5 -- # export PATH 00:05:47.070 15:10:14 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@51 -- # : 0 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:47.070 
15:10:14 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:47.070 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:47.070 15:10:14 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:47.070 15:10:14 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:47.070 15:10:14 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:47.070 15:10:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:47.070 15:10:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:47.070 15:10:14 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:47.071 15:10:14 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:47.071 15:10:14 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:47.071 15:10:14 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:47.071 15:10:14 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:47.071 15:10:14 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:47.071 15:10:14 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:47.071 15:10:14 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:47.071 15:10:14 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:47.071 15:10:14 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:47.071 15:10:14 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:47.071 15:10:14 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:47.071 INFO: JSON configuration test init 00:05:47.071 15:10:14 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:47.071 15:10:14 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:47.071 15:10:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:47.071 15:10:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.071 15:10:14 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:47.071 15:10:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:47.071 15:10:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.071 15:10:14 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:47.071 15:10:14 json_config -- json_config/common.sh@9 -- # 
local app=target 00:05:47.071 15:10:14 json_config -- json_config/common.sh@10 -- # shift 00:05:47.071 15:10:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:47.071 15:10:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:47.071 15:10:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:47.071 15:10:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:47.071 15:10:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:47.071 15:10:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2941661 00:05:47.071 15:10:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:47.071 Waiting for target to run... 00:05:47.071 15:10:14 json_config -- json_config/common.sh@25 -- # waitforlisten 2941661 /var/tmp/spdk_tgt.sock 00:05:47.071 15:10:14 json_config -- common/autotest_common.sh@833 -- # '[' -z 2941661 ']' 00:05:47.071 15:10:14 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:47.071 15:10:14 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:47.071 15:10:14 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:47.071 15:10:14 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:47.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:47.071 15:10:14 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:47.071 15:10:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.330 [2024-11-06 15:10:14.760175] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
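The json_config target above is launched with --wait-for-rpc on a dedicated socket (/var/tmp/spdk_tgt.sock), so it idles until the framework is initialized over RPC; the harness then feeds a generated configuration into load_config, as the lines that follow show. A compact sketch of that control flow, assuming the same socket path and that load_config reads the JSON document from stdin (the rpc.py default):

# target parked before subsystem init, RPC server on a private socket
./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &

# push a previously saved (or generated) configuration
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config < /tmp/config.json
# depending on the rpc.py version, load_config may kick off framework init on
# its own; otherwise it is started explicitly:
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init

# from here on, normal runtime RPCs work, e.g. re-exporting the live config
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/config.out.json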
00:05:47.330 [2024-11-06 15:10:14.760293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2941661 ] 00:05:47.589 [2024-11-06 15:10:15.151387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.847 [2024-11-06 15:10:15.250257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.106 15:10:15 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:48.106 15:10:15 json_config -- common/autotest_common.sh@866 -- # return 0 00:05:48.106 15:10:15 json_config -- json_config/common.sh@26 -- # echo '' 00:05:48.106 00:05:48.106 15:10:15 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:48.106 15:10:15 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:48.106 15:10:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:48.106 15:10:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.106 15:10:15 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:48.106 15:10:15 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:48.106 15:10:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:48.106 15:10:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.106 15:10:15 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:48.106 15:10:15 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:48.106 15:10:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:52.294 15:10:19 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:52.294 15:10:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:52.294 15:10:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@54 -- 
# echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@54 -- # sort 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:52.294 15:10:19 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:52.294 15:10:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:52.294 15:10:19 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:52.294 15:10:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:05:52.294 15:10:19 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:05:52.294 15:10:19 json_config -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:05:52.294 15:10:19 json_config -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:52.294 15:10:19 json_config -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:52.294 15:10:19 json_config -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:52.294 15:10:19 json_config -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:52.294 15:10:19 json_config -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:52.294 15:10:19 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:52.294 15:10:19 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:52.294 15:10:19 json_config -- nvmf/common.sh@442 -- # [[ phy-fallback != virt ]] 00:05:52.294 15:10:19 json_config -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:52.294 15:10:19 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:05:52.294 15:10:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:58.859 
15:10:26 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@320 -- # e810=() 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@321 -- # x722=() 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@322 -- # mlx=() 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:05:58.859 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:05:58.859 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:05:58.859 15:10:26 json_config -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:58.859 15:10:26 json_config -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:05:58.860 Found net devices under 0000:18:00.0: mlx_0_0 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:05:58.860 Found net devices under 0000:18:00.1: mlx_0_1 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@442 -- # is_hw=yes 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@448 -- # rdma_device_init 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@62 -- # uname 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@68 -- # modprobe ib_umad 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@530 -- # allocate_nic_ips 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@77 -- # 
get_rdma_if_list 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:05:58.860 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:58.860 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:05:58.860 altname enp24s0f0np0 00:05:58.860 altname ens785f0np0 00:05:58.860 inet 192.168.100.8/24 scope global mlx_0_0 00:05:58.860 valid_lft forever preferred_lft forever 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:05:58.860 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:58.860 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:05:58.860 altname enp24s0f1np1 00:05:58.860 altname ens785f1np1 
00:05:58.860 inet 192.168.100.9/24 scope global mlx_0_1 00:05:58.860 valid_lft forever preferred_lft forever 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@450 -- # return 0 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:05:58.860 192.168.100.9' 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:05:58.860 192.168.100.9' 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@485 -- # head -n 1 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:58.860 15:10:26 json_config -- 
nvmf/common.sh@486 -- # echo '192.168.100.8 00:05:58.860 192.168.100.9' 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@486 -- # tail -n +2 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@486 -- # head -n 1 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:58.860 15:10:26 json_config -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:05:58.861 15:10:26 json_config -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:58.861 15:10:26 json_config -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:05:58.861 15:10:26 json_config -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:05:58.861 15:10:26 json_config -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:05:58.861 15:10:26 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]] 00:05:58.861 15:10:26 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:58.861 15:10:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:59.119 MallocForNvmf0 00:05:59.119 15:10:26 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:59.119 15:10:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:59.378 MallocForNvmf1 00:05:59.379 15:10:26 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:59.379 15:10:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:59.637 [2024-11-06 15:10:27.060967] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:59.637 [2024-11-06 15:10:27.105599] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000298c0/0x7ff9434ba940) succeed. 00:05:59.637 [2024-11-06 15:10:27.120161] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029a40/0x7ff943476940) succeed. 
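The trace above shows json_config_setup_target provisioning the NVMe-oF target over the two RDMA ports it just discovered: two malloc bdevs are created and an RDMA transport is registered through rpc.py (note the warning that the requested 0-byte in-capsule data size is bumped to the 256-byte minimum). A minimal sketch of the same sequence run by hand, assuming an spdk_tgt is already listening on /var/tmp/spdk_tgt.sock and rpc.py is invoked from an SPDK checkout:

    # backing namespaces: size in MiB, block size in bytes
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    # RDMA transport with an 8192-byte IO unit and no in-capsule data requested
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0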
00:05:59.637 15:10:27 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:59.637 15:10:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:59.896 15:10:27 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:59.896 15:10:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:00.155 15:10:27 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:00.155 15:10:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:00.414 15:10:27 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:00.414 15:10:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:00.414 [2024-11-06 15:10:27.966819] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:00.414 15:10:27 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:00.414 15:10:27 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:00.414 15:10:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.414 15:10:28 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:00.414 15:10:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:00.414 15:10:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.672 15:10:28 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:00.672 15:10:28 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:00.672 15:10:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:00.672 MallocBdevForConfigChangeCheck 00:06:00.672 15:10:28 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:00.672 15:10:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:00.672 15:10:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.930 15:10:28 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:00.930 15:10:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:01.190 15:10:28 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:01.190 INFO: shutting down applications... 
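Before the save_config and teardown steps above, the target was populated with a single subsystem backed by the two malloc bdevs and exposed on the first RDMA IP. The equivalent manual RPC sequence, again only a sketch against the same /var/tmp/spdk_tgt.sock socket:

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420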
00:06:01.190 15:10:28 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:01.190 15:10:28 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:01.190 15:10:28 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:01.190 15:10:28 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:09.303 Calling clear_iscsi_subsystem 00:06:09.303 Calling clear_nvmf_subsystem 00:06:09.303 Calling clear_nbd_subsystem 00:06:09.303 Calling clear_ublk_subsystem 00:06:09.303 Calling clear_vhost_blk_subsystem 00:06:09.303 Calling clear_vhost_scsi_subsystem 00:06:09.303 Calling clear_bdev_subsystem 00:06:09.303 15:10:35 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:06:09.303 15:10:35 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:09.303 15:10:35 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:09.303 15:10:35 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:09.303 15:10:35 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:09.303 15:10:35 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:09.303 15:10:36 json_config -- json_config/json_config.sh@352 -- # break 00:06:09.303 15:10:36 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:09.303 15:10:36 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:09.303 15:10:36 json_config -- json_config/common.sh@31 -- # local app=target 00:06:09.303 15:10:36 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:09.303 15:10:36 json_config -- json_config/common.sh@35 -- # [[ -n 2941661 ]] 00:06:09.303 15:10:36 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2941661 00:06:09.303 15:10:36 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:09.303 15:10:36 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.303 15:10:36 json_config -- json_config/common.sh@41 -- # kill -0 2941661 00:06:09.303 15:10:36 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:09.303 15:10:36 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:09.303 15:10:36 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.304 15:10:36 json_config -- json_config/common.sh@41 -- # kill -0 2941661 00:06:09.304 15:10:36 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:09.563 15:10:37 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:09.563 15:10:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.563 15:10:37 json_config -- json_config/common.sh@41 -- # kill -0 2941661 00:06:09.563 15:10:37 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:09.563 15:10:37 json_config -- json_config/common.sh@43 -- # break 00:06:09.563 15:10:37 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:09.563 15:10:37 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:09.563 SPDK target shutdown done 00:06:09.563 15:10:37 json_config -- 
json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:09.563 INFO: relaunching applications... 00:06:09.563 15:10:37 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:09.563 15:10:37 json_config -- json_config/common.sh@9 -- # local app=target 00:06:09.563 15:10:37 json_config -- json_config/common.sh@10 -- # shift 00:06:09.563 15:10:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:09.563 15:10:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:09.563 15:10:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:09.563 15:10:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:09.563 15:10:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:09.563 15:10:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2946763 00:06:09.563 15:10:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:09.563 Waiting for target to run... 00:06:09.563 15:10:37 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:09.563 15:10:37 json_config -- json_config/common.sh@25 -- # waitforlisten 2946763 /var/tmp/spdk_tgt.sock 00:06:09.563 15:10:37 json_config -- common/autotest_common.sh@833 -- # '[' -z 2946763 ']' 00:06:09.563 15:10:37 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:09.563 15:10:37 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:09.563 15:10:37 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:09.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:09.563 15:10:37 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:09.563 15:10:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.822 [2024-11-06 15:10:37.286857] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:06:09.822 [2024-11-06 15:10:37.286965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2946763 ] 00:06:10.081 [2024-11-06 15:10:37.685134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.340 [2024-11-06 15:10:37.782370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.530 [2024-11-06 15:10:41.411302] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x61200002a1c0/0x7f0df7548940) succeed. 00:06:14.530 [2024-11-06 15:10:41.422982] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x61200002a340/0x7f0df68a6940) succeed. 
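At this point the test relaunches the target from the JSON it saved before shutdown, then waits for the RPC socket to answer before issuing further RPCs. A rough equivalent of json_config_test_start_app plus waitforlisten, with the poll loop as an assumed stand-in for the waitforlisten helper:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json &
    tgt_pid=$!
    # poll until the target answers RPCs (waitforlisten does this with a timeout)
    until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done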
00:06:14.530 [2024-11-06 15:10:41.485664] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:14.530 15:10:41 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:14.530 15:10:41 json_config -- common/autotest_common.sh@866 -- # return 0 00:06:14.530 15:10:41 json_config -- json_config/common.sh@26 -- # echo '' 00:06:14.530 00:06:14.530 15:10:41 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:14.530 15:10:41 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:14.530 INFO: Checking if target configuration is the same... 00:06:14.530 15:10:41 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:14.530 15:10:41 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:14.530 15:10:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:14.530 + '[' 2 -ne 2 ']' 00:06:14.530 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:14.530 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:14.530 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:14.530 +++ basename /dev/fd/62 00:06:14.530 ++ mktemp /tmp/62.XXX 00:06:14.530 + tmp_file_1=/tmp/62.Iih 00:06:14.530 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:14.530 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:14.530 + tmp_file_2=/tmp/spdk_tgt_config.json.gvO 00:06:14.530 + ret=0 00:06:14.530 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:14.530 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:14.530 + diff -u /tmp/62.Iih /tmp/spdk_tgt_config.json.gvO 00:06:14.530 + echo 'INFO: JSON config files are the same' 00:06:14.530 INFO: JSON config files are the same 00:06:14.530 + rm /tmp/62.Iih /tmp/spdk_tgt_config.json.gvO 00:06:14.530 + exit 0 00:06:14.530 15:10:41 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:14.530 15:10:41 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:14.530 INFO: changing configuration and checking if this can be detected... 
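The "same configuration" check above is just a sorted diff: the live target's save_config output and the saved spdk_tgt_config.json are both passed through config_filter.py -method sort and compared with diff -u. A sketch of that check, assuming config_filter.py reads JSON on stdin as the trace suggests, with hypothetical temp-file names:

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
    ./test/json_config/config_filter.py -method sort \
        < spdk_tgt_config.json > /tmp/saved_sorted.json
    diff -u /tmp/saved_sorted.json /tmp/live_sorted.json \
        && echo 'INFO: JSON config files are the same'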
00:06:14.530 15:10:41 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:14.530 15:10:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:14.530 15:10:42 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:14.530 15:10:42 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:14.530 15:10:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:14.530 + '[' 2 -ne 2 ']' 00:06:14.530 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:14.530 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:14.530 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:14.530 +++ basename /dev/fd/62 00:06:14.530 ++ mktemp /tmp/62.XXX 00:06:14.530 + tmp_file_1=/tmp/62.Bas 00:06:14.530 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:14.530 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:14.530 + tmp_file_2=/tmp/spdk_tgt_config.json.TOM 00:06:14.530 + ret=0 00:06:14.530 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:15.100 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:15.100 + diff -u /tmp/62.Bas /tmp/spdk_tgt_config.json.TOM 00:06:15.100 + ret=1 00:06:15.100 + echo '=== Start of file: /tmp/62.Bas ===' 00:06:15.100 + cat /tmp/62.Bas 00:06:15.100 + echo '=== End of file: /tmp/62.Bas ===' 00:06:15.100 + echo '' 00:06:15.100 + echo '=== Start of file: /tmp/spdk_tgt_config.json.TOM ===' 00:06:15.100 + cat /tmp/spdk_tgt_config.json.TOM 00:06:15.100 + echo '=== End of file: /tmp/spdk_tgt_config.json.TOM ===' 00:06:15.100 + echo '' 00:06:15.100 + rm /tmp/62.Bas /tmp/spdk_tgt_config.json.TOM 00:06:15.100 + exit 1 00:06:15.100 15:10:42 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:15.100 INFO: configuration change detected. 
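The change-detection half works the same way in reverse: MallocBdevForConfigChangeCheck is deleted from the live target, the sorted diff is re-run, and a non-zero diff exit status (the ret=1 above) is what the test reports as a detected configuration change. For example:

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    # re-running the diff from the previous sketch now exits 1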
00:06:15.100 15:10:42 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:15.100 15:10:42 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:15.100 15:10:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:15.100 15:10:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.100 15:10:42 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:15.100 15:10:42 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:15.100 15:10:42 json_config -- json_config/json_config.sh@324 -- # [[ -n 2946763 ]] 00:06:15.100 15:10:42 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:15.100 15:10:42 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:15.100 15:10:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:15.100 15:10:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.100 15:10:42 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:15.100 15:10:42 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:15.100 15:10:42 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:15.100 15:10:42 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:15.100 15:10:42 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:15.100 15:10:42 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:15.100 15:10:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:15.100 15:10:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.100 15:10:42 json_config -- json_config/json_config.sh@330 -- # killprocess 2946763 00:06:15.100 15:10:42 json_config -- common/autotest_common.sh@952 -- # '[' -z 2946763 ']' 00:06:15.100 15:10:42 json_config -- common/autotest_common.sh@956 -- # kill -0 2946763 00:06:15.100 15:10:42 json_config -- common/autotest_common.sh@957 -- # uname 00:06:15.100 15:10:42 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:15.100 15:10:42 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2946763 00:06:15.100 15:10:42 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:15.100 15:10:42 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:15.100 15:10:42 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2946763' 00:06:15.100 killing process with pid 2946763 00:06:15.100 15:10:42 json_config -- common/autotest_common.sh@971 -- # kill 2946763 00:06:15.100 15:10:42 json_config -- common/autotest_common.sh@976 -- # wait 2946763 00:06:23.317 15:10:50 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:23.317 15:10:50 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:23.317 15:10:50 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:23.317 15:10:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.317 15:10:50 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:23.317 15:10:50 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:23.317 INFO: Success 00:06:23.317 15:10:50 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:06:23.317 15:10:50 json_config -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:23.317 15:10:50 json_config -- nvmf/common.sh@121 -- # sync 00:06:23.317 15:10:50 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']' 00:06:23.317 15:10:50 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']' 00:06:23.317 15:10:50 json_config -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:06:23.317 15:10:50 json_config -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:23.317 15:10:50 json_config -- nvmf/common.sh@523 -- # [[ '' == \t\c\p ]] 00:06:23.317 00:06:23.317 real 0m36.184s 00:06:23.317 user 0m38.641s 00:06:23.317 sys 0m8.379s 00:06:23.317 15:10:50 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:23.317 15:10:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.317 ************************************ 00:06:23.317 END TEST json_config 00:06:23.317 ************************************ 00:06:23.317 15:10:50 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:23.317 15:10:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:23.317 15:10:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:23.317 15:10:50 -- common/autotest_common.sh@10 -- # set +x 00:06:23.317 ************************************ 00:06:23.317 START TEST json_config_extra_key 00:06:23.317 ************************************ 00:06:23.317 15:10:50 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:23.317 15:10:50 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:23.317 15:10:50 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:06:23.317 15:10:50 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:23.317 15:10:50 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:23.318 15:10:50 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.318 15:10:50 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:23.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.318 --rc genhtml_branch_coverage=1 00:06:23.318 --rc genhtml_function_coverage=1 00:06:23.318 --rc genhtml_legend=1 00:06:23.318 --rc geninfo_all_blocks=1 00:06:23.318 --rc geninfo_unexecuted_blocks=1 00:06:23.318 00:06:23.318 ' 00:06:23.318 15:10:50 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:23.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.318 --rc genhtml_branch_coverage=1 00:06:23.318 --rc genhtml_function_coverage=1 00:06:23.318 --rc genhtml_legend=1 00:06:23.318 --rc geninfo_all_blocks=1 00:06:23.318 --rc geninfo_unexecuted_blocks=1 00:06:23.318 00:06:23.318 ' 00:06:23.318 15:10:50 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:23.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.318 --rc genhtml_branch_coverage=1 00:06:23.318 --rc genhtml_function_coverage=1 00:06:23.318 --rc genhtml_legend=1 00:06:23.318 --rc geninfo_all_blocks=1 00:06:23.318 --rc geninfo_unexecuted_blocks=1 00:06:23.318 00:06:23.318 ' 00:06:23.318 15:10:50 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:23.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.318 --rc genhtml_branch_coverage=1 00:06:23.318 --rc genhtml_function_coverage=1 00:06:23.318 --rc genhtml_legend=1 00:06:23.318 --rc geninfo_all_blocks=1 00:06:23.318 --rc geninfo_unexecuted_blocks=1 00:06:23.318 00:06:23.318 ' 00:06:23.318 15:10:50 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.318 
15:10:50 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.318 15:10:50 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.318 15:10:50 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.318 15:10:50 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.318 15:10:50 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.318 15:10:50 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:23.318 15:10:50 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:23.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:23.318 15:10:50 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:23.318 15:10:50 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:06:23.318 15:10:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:23.318 15:10:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:23.318 15:10:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:23.319 15:10:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:23.319 15:10:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:23.319 15:10:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:23.319 15:10:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:23.319 15:10:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:23.319 15:10:50 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:23.319 15:10:50 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:23.319 INFO: launching applications... 
00:06:23.319 15:10:50 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:23.319 15:10:50 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:23.319 15:10:50 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:23.319 15:10:50 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:23.319 15:10:50 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:23.319 15:10:50 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:23.319 15:10:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:23.319 15:10:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:23.319 15:10:50 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2948666 00:06:23.319 15:10:50 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:23.319 Waiting for target to run... 00:06:23.319 15:10:50 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2948666 /var/tmp/spdk_tgt.sock 00:06:23.319 15:10:50 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 2948666 ']' 00:06:23.319 15:10:50 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:23.319 15:10:50 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:23.319 15:10:50 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:23.319 15:10:50 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:23.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:23.319 15:10:50 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:23.319 15:10:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:23.578 [2024-11-06 15:10:51.020034] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:06:23.578 [2024-11-06 15:10:51.020144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2948666 ] 00:06:24.146 [2024-11-06 15:10:51.642305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.146 [2024-11-06 15:10:51.744343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.083 15:10:52 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:25.083 15:10:52 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:06:25.083 15:10:52 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:25.083 00:06:25.083 15:10:52 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:25.083 INFO: shutting down applications... 
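json_config_extra_key exercises the same start/stop plumbing but boots the target from test/json_config/extra_key.json instead of a previously saved config. A sketch of the launch, keeping the pid for the shutdown loop that follows (tgt_pid is a hypothetical variable name, not part of the test scripts):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json &
    tgt_pid=$!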
00:06:25.083 15:10:52 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:25.083 15:10:52 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:25.083 15:10:52 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:25.083 15:10:52 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2948666 ]] 00:06:25.083 15:10:52 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2948666 00:06:25.083 15:10:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:25.083 15:10:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:25.083 15:10:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2948666 00:06:25.083 15:10:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:25.342 15:10:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:25.342 15:10:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:25.342 15:10:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2948666 00:06:25.342 15:10:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:25.910 15:10:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:25.910 15:10:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:25.910 15:10:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2948666 00:06:25.910 15:10:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:26.478 15:10:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:26.478 15:10:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:26.478 15:10:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2948666 00:06:26.478 15:10:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:27.046 15:10:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:27.046 15:10:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:27.046 15:10:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2948666 00:06:27.046 15:10:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:27.306 15:10:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:27.306 15:10:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:27.306 15:10:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2948666 00:06:27.306 15:10:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:27.876 15:10:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:27.876 15:10:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:27.876 15:10:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2948666 00:06:27.876 15:10:55 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:27.876 15:10:55 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:27.876 15:10:55 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:27.876 15:10:55 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:27.876 SPDK target shutdown done 00:06:27.876 15:10:55 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:27.876 Success 00:06:27.876 00:06:27.876 real 0m4.684s 00:06:27.876 user 0m3.784s 00:06:27.876 sys 0m0.886s 
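The teardown that produced "SPDK target shutdown done" above follows json_config_test_shutdown_app's pattern: send SIGINT, then poll the pid for up to 30 half-second intervals before declaring the shutdown complete. A minimal sketch of that loop, reusing the hypothetical tgt_pid from the launch sketch:

    kill -SIGINT "$tgt_pid"
    for i in $(seq 1 30); do
        kill -0 "$tgt_pid" 2>/dev/null || break   # process gone -> shutdown finished
        sleep 0.5
    done
    kill -0 "$tgt_pid" 2>/dev/null || echo 'SPDK target shutdown done'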
00:06:27.876 15:10:55 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:27.876 15:10:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:27.876 ************************************ 00:06:27.876 END TEST json_config_extra_key 00:06:27.876 ************************************ 00:06:27.876 15:10:55 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:27.876 15:10:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:27.876 15:10:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:27.876 15:10:55 -- common/autotest_common.sh@10 -- # set +x 00:06:27.876 ************************************ 00:06:27.876 START TEST alias_rpc 00:06:27.876 ************************************ 00:06:27.876 15:10:55 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:28.136 * Looking for test storage... 00:06:28.136 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:06:28.136 15:10:55 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:28.136 15:10:55 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:28.136 15:10:55 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:28.136 15:10:55 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.136 15:10:55 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:28.136 15:10:55 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.136 15:10:55 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:28.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.136 --rc genhtml_branch_coverage=1 00:06:28.136 --rc genhtml_function_coverage=1 00:06:28.136 --rc genhtml_legend=1 00:06:28.136 --rc geninfo_all_blocks=1 00:06:28.136 --rc geninfo_unexecuted_blocks=1 00:06:28.136 00:06:28.136 ' 00:06:28.136 15:10:55 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:28.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.136 --rc genhtml_branch_coverage=1 00:06:28.136 --rc genhtml_function_coverage=1 00:06:28.136 --rc genhtml_legend=1 00:06:28.136 --rc geninfo_all_blocks=1 00:06:28.136 --rc geninfo_unexecuted_blocks=1 00:06:28.136 00:06:28.136 ' 00:06:28.136 15:10:55 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:28.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.136 --rc genhtml_branch_coverage=1 00:06:28.136 --rc genhtml_function_coverage=1 00:06:28.136 --rc genhtml_legend=1 00:06:28.136 --rc geninfo_all_blocks=1 00:06:28.136 --rc geninfo_unexecuted_blocks=1 00:06:28.136 00:06:28.136 ' 00:06:28.136 15:10:55 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:28.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.136 --rc genhtml_branch_coverage=1 00:06:28.136 --rc genhtml_function_coverage=1 00:06:28.136 --rc genhtml_legend=1 00:06:28.136 --rc geninfo_all_blocks=1 00:06:28.136 --rc geninfo_unexecuted_blocks=1 00:06:28.136 00:06:28.136 ' 00:06:28.136 15:10:55 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:28.136 15:10:55 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2949379 00:06:28.136 15:10:55 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:28.136 15:10:55 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2949379 00:06:28.136 15:10:55 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 2949379 ']' 00:06:28.136 15:10:55 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.136 15:10:55 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:28.136 15:10:55 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:28.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.136 15:10:55 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:28.136 15:10:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.136 [2024-11-06 15:10:55.765836] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:06:28.136 [2024-11-06 15:10:55.765945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2949379 ] 00:06:28.395 [2024-11-06 15:10:55.910264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.395 [2024-11-06 15:10:56.012887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.334 15:10:56 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:29.334 15:10:56 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:29.334 15:10:56 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:29.593 15:10:57 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2949379 00:06:29.593 15:10:57 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 2949379 ']' 00:06:29.593 15:10:57 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 2949379 00:06:29.593 15:10:57 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:06:29.593 15:10:57 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:29.593 15:10:57 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2949379 00:06:29.593 15:10:57 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:29.593 15:10:57 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:29.593 15:10:57 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2949379' 00:06:29.593 killing process with pid 2949379 00:06:29.593 15:10:57 alias_rpc -- common/autotest_common.sh@971 -- # kill 2949379 00:06:29.593 15:10:57 alias_rpc -- common/autotest_common.sh@976 -- # wait 2949379 00:06:32.129 00:06:32.129 real 0m3.862s 00:06:32.129 user 0m3.816s 00:06:32.129 sys 0m0.691s 00:06:32.130 15:10:59 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:32.130 15:10:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.130 ************************************ 00:06:32.130 END TEST alias_rpc 00:06:32.130 ************************************ 00:06:32.130 15:10:59 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:32.130 15:10:59 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:32.130 15:10:59 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:32.130 15:10:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:32.130 15:10:59 -- common/autotest_common.sh@10 -- # set +x 00:06:32.130 ************************************ 00:06:32.130 START TEST spdkcli_tcp 00:06:32.130 ************************************ 00:06:32.130 15:10:59 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:32.130 * Looking for test storage... 
00:06:32.130 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:06:32.130 15:10:59 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:32.130 15:10:59 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:32.130 15:10:59 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:32.130 15:10:59 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.130 15:10:59 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:32.130 15:10:59 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.130 15:10:59 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:32.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.130 --rc genhtml_branch_coverage=1 00:06:32.130 --rc genhtml_function_coverage=1 00:06:32.130 --rc genhtml_legend=1 00:06:32.130 --rc geninfo_all_blocks=1 00:06:32.130 --rc geninfo_unexecuted_blocks=1 00:06:32.130 00:06:32.130 ' 00:06:32.130 15:10:59 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:32.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.130 --rc genhtml_branch_coverage=1 00:06:32.130 --rc genhtml_function_coverage=1 00:06:32.130 --rc genhtml_legend=1 00:06:32.130 --rc geninfo_all_blocks=1 00:06:32.130 --rc geninfo_unexecuted_blocks=1 
00:06:32.130 00:06:32.130 ' 00:06:32.130 15:10:59 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:32.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.130 --rc genhtml_branch_coverage=1 00:06:32.130 --rc genhtml_function_coverage=1 00:06:32.130 --rc genhtml_legend=1 00:06:32.130 --rc geninfo_all_blocks=1 00:06:32.130 --rc geninfo_unexecuted_blocks=1 00:06:32.130 00:06:32.130 ' 00:06:32.130 15:10:59 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:32.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.130 --rc genhtml_branch_coverage=1 00:06:32.130 --rc genhtml_function_coverage=1 00:06:32.130 --rc genhtml_legend=1 00:06:32.130 --rc geninfo_all_blocks=1 00:06:32.130 --rc geninfo_unexecuted_blocks=1 00:06:32.130 00:06:32.130 ' 00:06:32.130 15:10:59 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:06:32.130 15:10:59 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:32.130 15:10:59 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:06:32.130 15:10:59 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:32.130 15:10:59 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:32.130 15:10:59 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:32.130 15:10:59 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:32.130 15:10:59 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:32.130 15:10:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:32.130 15:10:59 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2949988 00:06:32.130 15:10:59 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:32.130 15:10:59 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2949988 00:06:32.130 15:10:59 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 2949988 ']' 00:06:32.130 15:10:59 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.130 15:10:59 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:32.130 15:10:59 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.130 15:10:59 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:32.130 15:10:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:32.130 [2024-11-06 15:10:59.724695] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:06:32.130 [2024-11-06 15:10:59.724797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2949988 ] 00:06:32.390 [2024-11-06 15:10:59.872760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.390 [2024-11-06 15:10:59.981552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.390 [2024-11-06 15:10:59.981579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.324 15:11:00 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:33.324 15:11:00 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:06:33.324 15:11:00 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2950171 00:06:33.324 15:11:00 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:33.324 15:11:00 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:33.583 [ 00:06:33.583 "bdev_malloc_delete", 00:06:33.583 "bdev_malloc_create", 00:06:33.583 "bdev_null_resize", 00:06:33.583 "bdev_null_delete", 00:06:33.583 "bdev_null_create", 00:06:33.583 "bdev_nvme_cuse_unregister", 00:06:33.583 "bdev_nvme_cuse_register", 00:06:33.583 "bdev_opal_new_user", 00:06:33.583 "bdev_opal_set_lock_state", 00:06:33.583 "bdev_opal_delete", 00:06:33.583 "bdev_opal_get_info", 00:06:33.583 "bdev_opal_create", 00:06:33.583 "bdev_nvme_opal_revert", 00:06:33.583 "bdev_nvme_opal_init", 00:06:33.583 "bdev_nvme_send_cmd", 00:06:33.583 "bdev_nvme_set_keys", 00:06:33.583 "bdev_nvme_get_path_iostat", 00:06:33.583 "bdev_nvme_get_mdns_discovery_info", 00:06:33.583 "bdev_nvme_stop_mdns_discovery", 00:06:33.583 "bdev_nvme_start_mdns_discovery", 00:06:33.583 "bdev_nvme_set_multipath_policy", 00:06:33.583 "bdev_nvme_set_preferred_path", 00:06:33.583 "bdev_nvme_get_io_paths", 00:06:33.583 "bdev_nvme_remove_error_injection", 00:06:33.583 "bdev_nvme_add_error_injection", 00:06:33.583 "bdev_nvme_get_discovery_info", 00:06:33.583 "bdev_nvme_stop_discovery", 00:06:33.583 "bdev_nvme_start_discovery", 00:06:33.583 "bdev_nvme_get_controller_health_info", 00:06:33.583 "bdev_nvme_disable_controller", 00:06:33.583 "bdev_nvme_enable_controller", 00:06:33.583 "bdev_nvme_reset_controller", 00:06:33.583 "bdev_nvme_get_transport_statistics", 00:06:33.583 "bdev_nvme_apply_firmware", 00:06:33.583 "bdev_nvme_detach_controller", 00:06:33.583 "bdev_nvme_get_controllers", 00:06:33.583 "bdev_nvme_attach_controller", 00:06:33.583 "bdev_nvme_set_hotplug", 00:06:33.584 "bdev_nvme_set_options", 00:06:33.584 "bdev_passthru_delete", 00:06:33.584 "bdev_passthru_create", 00:06:33.584 "bdev_lvol_set_parent_bdev", 00:06:33.584 "bdev_lvol_set_parent", 00:06:33.584 "bdev_lvol_check_shallow_copy", 00:06:33.584 "bdev_lvol_start_shallow_copy", 00:06:33.584 "bdev_lvol_grow_lvstore", 00:06:33.584 "bdev_lvol_get_lvols", 00:06:33.584 "bdev_lvol_get_lvstores", 00:06:33.584 "bdev_lvol_delete", 00:06:33.584 "bdev_lvol_set_read_only", 00:06:33.584 "bdev_lvol_resize", 00:06:33.584 "bdev_lvol_decouple_parent", 00:06:33.584 "bdev_lvol_inflate", 00:06:33.584 "bdev_lvol_rename", 00:06:33.584 "bdev_lvol_clone_bdev", 00:06:33.584 "bdev_lvol_clone", 00:06:33.584 "bdev_lvol_snapshot", 00:06:33.584 "bdev_lvol_create", 00:06:33.584 "bdev_lvol_delete_lvstore", 00:06:33.584 "bdev_lvol_rename_lvstore", 
00:06:33.584 "bdev_lvol_create_lvstore", 00:06:33.584 "bdev_raid_set_options", 00:06:33.584 "bdev_raid_remove_base_bdev", 00:06:33.584 "bdev_raid_add_base_bdev", 00:06:33.584 "bdev_raid_delete", 00:06:33.584 "bdev_raid_create", 00:06:33.584 "bdev_raid_get_bdevs", 00:06:33.584 "bdev_error_inject_error", 00:06:33.584 "bdev_error_delete", 00:06:33.584 "bdev_error_create", 00:06:33.584 "bdev_split_delete", 00:06:33.584 "bdev_split_create", 00:06:33.584 "bdev_delay_delete", 00:06:33.584 "bdev_delay_create", 00:06:33.584 "bdev_delay_update_latency", 00:06:33.584 "bdev_zone_block_delete", 00:06:33.584 "bdev_zone_block_create", 00:06:33.584 "blobfs_create", 00:06:33.584 "blobfs_detect", 00:06:33.584 "blobfs_set_cache_size", 00:06:33.584 "bdev_aio_delete", 00:06:33.584 "bdev_aio_rescan", 00:06:33.584 "bdev_aio_create", 00:06:33.584 "bdev_ftl_set_property", 00:06:33.584 "bdev_ftl_get_properties", 00:06:33.584 "bdev_ftl_get_stats", 00:06:33.584 "bdev_ftl_unmap", 00:06:33.584 "bdev_ftl_unload", 00:06:33.584 "bdev_ftl_delete", 00:06:33.584 "bdev_ftl_load", 00:06:33.584 "bdev_ftl_create", 00:06:33.584 "bdev_virtio_attach_controller", 00:06:33.584 "bdev_virtio_scsi_get_devices", 00:06:33.584 "bdev_virtio_detach_controller", 00:06:33.584 "bdev_virtio_blk_set_hotplug", 00:06:33.584 "bdev_iscsi_delete", 00:06:33.584 "bdev_iscsi_create", 00:06:33.584 "bdev_iscsi_set_options", 00:06:33.584 "accel_error_inject_error", 00:06:33.584 "ioat_scan_accel_module", 00:06:33.584 "dsa_scan_accel_module", 00:06:33.584 "iaa_scan_accel_module", 00:06:33.584 "keyring_file_remove_key", 00:06:33.584 "keyring_file_add_key", 00:06:33.584 "keyring_linux_set_options", 00:06:33.584 "fsdev_aio_delete", 00:06:33.584 "fsdev_aio_create", 00:06:33.584 "iscsi_get_histogram", 00:06:33.584 "iscsi_enable_histogram", 00:06:33.584 "iscsi_set_options", 00:06:33.584 "iscsi_get_auth_groups", 00:06:33.584 "iscsi_auth_group_remove_secret", 00:06:33.584 "iscsi_auth_group_add_secret", 00:06:33.584 "iscsi_delete_auth_group", 00:06:33.584 "iscsi_create_auth_group", 00:06:33.584 "iscsi_set_discovery_auth", 00:06:33.584 "iscsi_get_options", 00:06:33.584 "iscsi_target_node_request_logout", 00:06:33.584 "iscsi_target_node_set_redirect", 00:06:33.584 "iscsi_target_node_set_auth", 00:06:33.584 "iscsi_target_node_add_lun", 00:06:33.584 "iscsi_get_stats", 00:06:33.584 "iscsi_get_connections", 00:06:33.584 "iscsi_portal_group_set_auth", 00:06:33.584 "iscsi_start_portal_group", 00:06:33.584 "iscsi_delete_portal_group", 00:06:33.584 "iscsi_create_portal_group", 00:06:33.584 "iscsi_get_portal_groups", 00:06:33.584 "iscsi_delete_target_node", 00:06:33.584 "iscsi_target_node_remove_pg_ig_maps", 00:06:33.584 "iscsi_target_node_add_pg_ig_maps", 00:06:33.584 "iscsi_create_target_node", 00:06:33.584 "iscsi_get_target_nodes", 00:06:33.584 "iscsi_delete_initiator_group", 00:06:33.584 "iscsi_initiator_group_remove_initiators", 00:06:33.584 "iscsi_initiator_group_add_initiators", 00:06:33.584 "iscsi_create_initiator_group", 00:06:33.584 "iscsi_get_initiator_groups", 00:06:33.584 "nvmf_set_crdt", 00:06:33.584 "nvmf_set_config", 00:06:33.584 "nvmf_set_max_subsystems", 00:06:33.584 "nvmf_stop_mdns_prr", 00:06:33.584 "nvmf_publish_mdns_prr", 00:06:33.584 "nvmf_subsystem_get_listeners", 00:06:33.584 "nvmf_subsystem_get_qpairs", 00:06:33.584 "nvmf_subsystem_get_controllers", 00:06:33.584 "nvmf_get_stats", 00:06:33.584 "nvmf_get_transports", 00:06:33.584 "nvmf_create_transport", 00:06:33.584 "nvmf_get_targets", 00:06:33.584 "nvmf_delete_target", 00:06:33.584 "nvmf_create_target", 
00:06:33.584 "nvmf_subsystem_allow_any_host", 00:06:33.584 "nvmf_subsystem_set_keys", 00:06:33.584 "nvmf_subsystem_remove_host", 00:06:33.584 "nvmf_subsystem_add_host", 00:06:33.584 "nvmf_ns_remove_host", 00:06:33.584 "nvmf_ns_add_host", 00:06:33.584 "nvmf_subsystem_remove_ns", 00:06:33.584 "nvmf_subsystem_set_ns_ana_group", 00:06:33.584 "nvmf_subsystem_add_ns", 00:06:33.584 "nvmf_subsystem_listener_set_ana_state", 00:06:33.584 "nvmf_discovery_get_referrals", 00:06:33.584 "nvmf_discovery_remove_referral", 00:06:33.584 "nvmf_discovery_add_referral", 00:06:33.584 "nvmf_subsystem_remove_listener", 00:06:33.584 "nvmf_subsystem_add_listener", 00:06:33.584 "nvmf_delete_subsystem", 00:06:33.584 "nvmf_create_subsystem", 00:06:33.584 "nvmf_get_subsystems", 00:06:33.584 "env_dpdk_get_mem_stats", 00:06:33.584 "nbd_get_disks", 00:06:33.584 "nbd_stop_disk", 00:06:33.584 "nbd_start_disk", 00:06:33.584 "ublk_recover_disk", 00:06:33.584 "ublk_get_disks", 00:06:33.584 "ublk_stop_disk", 00:06:33.584 "ublk_start_disk", 00:06:33.584 "ublk_destroy_target", 00:06:33.584 "ublk_create_target", 00:06:33.584 "virtio_blk_create_transport", 00:06:33.584 "virtio_blk_get_transports", 00:06:33.584 "vhost_controller_set_coalescing", 00:06:33.584 "vhost_get_controllers", 00:06:33.584 "vhost_delete_controller", 00:06:33.584 "vhost_create_blk_controller", 00:06:33.584 "vhost_scsi_controller_remove_target", 00:06:33.584 "vhost_scsi_controller_add_target", 00:06:33.584 "vhost_start_scsi_controller", 00:06:33.584 "vhost_create_scsi_controller", 00:06:33.584 "thread_set_cpumask", 00:06:33.584 "scheduler_set_options", 00:06:33.584 "framework_get_governor", 00:06:33.584 "framework_get_scheduler", 00:06:33.584 "framework_set_scheduler", 00:06:33.584 "framework_get_reactors", 00:06:33.584 "thread_get_io_channels", 00:06:33.584 "thread_get_pollers", 00:06:33.584 "thread_get_stats", 00:06:33.584 "framework_monitor_context_switch", 00:06:33.584 "spdk_kill_instance", 00:06:33.584 "log_enable_timestamps", 00:06:33.584 "log_get_flags", 00:06:33.584 "log_clear_flag", 00:06:33.584 "log_set_flag", 00:06:33.584 "log_get_level", 00:06:33.584 "log_set_level", 00:06:33.584 "log_get_print_level", 00:06:33.584 "log_set_print_level", 00:06:33.584 "framework_enable_cpumask_locks", 00:06:33.584 "framework_disable_cpumask_locks", 00:06:33.584 "framework_wait_init", 00:06:33.584 "framework_start_init", 00:06:33.584 "scsi_get_devices", 00:06:33.584 "bdev_get_histogram", 00:06:33.584 "bdev_enable_histogram", 00:06:33.584 "bdev_set_qos_limit", 00:06:33.584 "bdev_set_qd_sampling_period", 00:06:33.584 "bdev_get_bdevs", 00:06:33.584 "bdev_reset_iostat", 00:06:33.584 "bdev_get_iostat", 00:06:33.584 "bdev_examine", 00:06:33.584 "bdev_wait_for_examine", 00:06:33.584 "bdev_set_options", 00:06:33.584 "accel_get_stats", 00:06:33.584 "accel_set_options", 00:06:33.584 "accel_set_driver", 00:06:33.584 "accel_crypto_key_destroy", 00:06:33.584 "accel_crypto_keys_get", 00:06:33.584 "accel_crypto_key_create", 00:06:33.584 "accel_assign_opc", 00:06:33.584 "accel_get_module_info", 00:06:33.584 "accel_get_opc_assignments", 00:06:33.584 "vmd_rescan", 00:06:33.584 "vmd_remove_device", 00:06:33.584 "vmd_enable", 00:06:33.584 "sock_get_default_impl", 00:06:33.584 "sock_set_default_impl", 00:06:33.584 "sock_impl_set_options", 00:06:33.584 "sock_impl_get_options", 00:06:33.584 "iobuf_get_stats", 00:06:33.584 "iobuf_set_options", 00:06:33.584 "keyring_get_keys", 00:06:33.584 "framework_get_pci_devices", 00:06:33.584 "framework_get_config", 00:06:33.584 "framework_get_subsystems", 
00:06:33.584 "fsdev_set_opts", 00:06:33.584 "fsdev_get_opts", 00:06:33.584 "trace_get_info", 00:06:33.584 "trace_get_tpoint_group_mask", 00:06:33.584 "trace_disable_tpoint_group", 00:06:33.584 "trace_enable_tpoint_group", 00:06:33.584 "trace_clear_tpoint_mask", 00:06:33.584 "trace_set_tpoint_mask", 00:06:33.584 "notify_get_notifications", 00:06:33.584 "notify_get_types", 00:06:33.584 "spdk_get_version", 00:06:33.584 "rpc_get_methods" 00:06:33.584 ] 00:06:33.584 15:11:00 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:33.584 15:11:00 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:33.584 15:11:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:33.584 15:11:01 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:33.584 15:11:01 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2949988 00:06:33.584 15:11:01 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 2949988 ']' 00:06:33.584 15:11:01 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 2949988 00:06:33.584 15:11:01 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:06:33.584 15:11:01 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:33.584 15:11:01 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2949988 00:06:33.584 15:11:01 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:33.584 15:11:01 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:33.585 15:11:01 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2949988' 00:06:33.585 killing process with pid 2949988 00:06:33.585 15:11:01 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 2949988 00:06:33.585 15:11:01 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 2949988 00:06:36.246 00:06:36.246 real 0m4.004s 00:06:36.246 user 0m7.161s 00:06:36.246 sys 0m0.722s 00:06:36.246 15:11:03 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:36.246 15:11:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.246 ************************************ 00:06:36.246 END TEST spdkcli_tcp 00:06:36.246 ************************************ 00:06:36.246 15:11:03 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:36.246 15:11:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:36.246 15:11:03 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:36.246 15:11:03 -- common/autotest_common.sh@10 -- # set +x 00:06:36.246 ************************************ 00:06:36.246 START TEST dpdk_mem_utility 00:06:36.246 ************************************ 00:06:36.246 15:11:03 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:36.246 * Looking for test storage... 
00:06:36.246 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:06:36.246 15:11:03 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:36.246 15:11:03 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:06:36.246 15:11:03 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:36.246 15:11:03 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.246 15:11:03 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:36.246 15:11:03 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.246 15:11:03 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:36.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.246 --rc genhtml_branch_coverage=1 00:06:36.246 --rc genhtml_function_coverage=1 00:06:36.246 --rc genhtml_legend=1 00:06:36.246 --rc geninfo_all_blocks=1 00:06:36.246 --rc geninfo_unexecuted_blocks=1 00:06:36.246 00:06:36.246 ' 00:06:36.246 15:11:03 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:36.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.246 --rc 
genhtml_branch_coverage=1 00:06:36.246 --rc genhtml_function_coverage=1 00:06:36.246 --rc genhtml_legend=1 00:06:36.246 --rc geninfo_all_blocks=1 00:06:36.246 --rc geninfo_unexecuted_blocks=1 00:06:36.246 00:06:36.246 ' 00:06:36.246 15:11:03 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:36.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.246 --rc genhtml_branch_coverage=1 00:06:36.246 --rc genhtml_function_coverage=1 00:06:36.246 --rc genhtml_legend=1 00:06:36.246 --rc geninfo_all_blocks=1 00:06:36.246 --rc geninfo_unexecuted_blocks=1 00:06:36.246 00:06:36.246 ' 00:06:36.246 15:11:03 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:36.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.246 --rc genhtml_branch_coverage=1 00:06:36.246 --rc genhtml_function_coverage=1 00:06:36.246 --rc genhtml_legend=1 00:06:36.246 --rc geninfo_all_blocks=1 00:06:36.246 --rc geninfo_unexecuted_blocks=1 00:06:36.246 00:06:36.246 ' 00:06:36.246 15:11:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:36.246 15:11:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2950608 00:06:36.246 15:11:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2950608 00:06:36.246 15:11:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:36.246 15:11:03 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 2950608 ']' 00:06:36.246 15:11:03 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.246 15:11:03 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:36.246 15:11:03 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.246 15:11:03 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:36.246 15:11:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:36.246 [2024-11-06 15:11:03.800082] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:06:36.246 [2024-11-06 15:11:03.800218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2950608 ] 00:06:36.506 [2024-11-06 15:11:03.968192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.506 [2024-11-06 15:11:04.073232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.444 15:11:04 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:37.444 15:11:04 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:06:37.444 15:11:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:37.444 15:11:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:37.444 15:11:04 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.444 15:11:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:37.444 { 00:06:37.444 "filename": "/tmp/spdk_mem_dump.txt" 00:06:37.444 } 00:06:37.444 15:11:04 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.444 15:11:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:37.444 DPDK memory size 816.000000 MiB in 1 heap(s) 00:06:37.444 1 heaps totaling size 816.000000 MiB 00:06:37.444 size: 816.000000 MiB heap id: 0 00:06:37.444 end heaps---------- 00:06:37.444 9 mempools totaling size 595.772034 MiB 00:06:37.444 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:37.444 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:37.444 size: 92.545471 MiB name: bdev_io_2950608 00:06:37.444 size: 50.003479 MiB name: msgpool_2950608 00:06:37.444 size: 36.509338 MiB name: fsdev_io_2950608 00:06:37.444 size: 21.763794 MiB name: PDU_Pool 00:06:37.444 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:37.444 size: 4.133484 MiB name: evtpool_2950608 00:06:37.444 size: 0.026123 MiB name: Session_Pool 00:06:37.444 end mempools------- 00:06:37.444 6 memzones totaling size 4.142822 MiB 00:06:37.444 size: 1.000366 MiB name: RG_ring_0_2950608 00:06:37.444 size: 1.000366 MiB name: RG_ring_1_2950608 00:06:37.444 size: 1.000366 MiB name: RG_ring_4_2950608 00:06:37.444 size: 1.000366 MiB name: RG_ring_5_2950608 00:06:37.444 size: 0.125366 MiB name: RG_ring_2_2950608 00:06:37.444 size: 0.015991 MiB name: RG_ring_3_2950608 00:06:37.444 end memzones------- 00:06:37.444 15:11:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:37.444 heap id: 0 total size: 816.000000 MiB number of busy elements: 44 number of free elements: 19 00:06:37.444 list of free elements. 
size: 16.857605 MiB 00:06:37.444 element at address: 0x200006400000 with size: 1.995972 MiB 00:06:37.444 element at address: 0x20000a600000 with size: 1.995972 MiB 00:06:37.444 element at address: 0x200003e00000 with size: 1.991028 MiB 00:06:37.444 element at address: 0x200018d00040 with size: 0.999939 MiB 00:06:37.444 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:37.444 element at address: 0x200019200000 with size: 0.999329 MiB 00:06:37.444 element at address: 0x200000400000 with size: 0.998108 MiB 00:06:37.444 element at address: 0x200031e00000 with size: 0.994324 MiB 00:06:37.444 element at address: 0x200018a00000 with size: 0.959900 MiB 00:06:37.444 element at address: 0x200019500040 with size: 0.937256 MiB 00:06:37.444 element at address: 0x200000200000 with size: 0.716980 MiB 00:06:37.444 element at address: 0x20001ac00000 with size: 0.583191 MiB 00:06:37.444 element at address: 0x200000c00000 with size: 0.495300 MiB 00:06:37.444 element at address: 0x200018e00000 with size: 0.491150 MiB 00:06:37.444 element at address: 0x200019600000 with size: 0.485657 MiB 00:06:37.444 element at address: 0x200012c00000 with size: 0.446167 MiB 00:06:37.444 element at address: 0x200028000000 with size: 0.411072 MiB 00:06:37.444 element at address: 0x200000800000 with size: 0.355286 MiB 00:06:37.444 element at address: 0x20000a5ff040 with size: 0.001038 MiB 00:06:37.444 list of standard malloc elements. size: 199.221497 MiB 00:06:37.444 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:06:37.444 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:06:37.444 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:06:37.444 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:37.444 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:37.444 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:37.444 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:06:37.444 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:37.444 element at address: 0x200012bff040 with size: 0.000427 MiB 00:06:37.444 element at address: 0x200012bffa00 with size: 0.000366 MiB 00:06:37.444 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:37.444 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:37.444 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:06:37.444 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:06:37.444 element at address: 0x2000004ffa40 with size: 0.000244 MiB 00:06:37.444 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:06:37.444 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:06:37.444 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:06:37.444 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:06:37.444 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:06:37.444 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:06:37.444 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:06:37.444 element at address: 0x200000cff000 with size: 0.000244 MiB 00:06:37.444 element at address: 0x20000a5ff480 with size: 0.000244 MiB 00:06:37.444 element at address: 0x20000a5ff580 with size: 0.000244 MiB 00:06:37.444 element at address: 0x20000a5ff680 with size: 0.000244 MiB 00:06:37.444 element at address: 0x20000a5ff780 with size: 0.000244 MiB 00:06:37.444 element at address: 0x20000a5ff880 with size: 0.000244 MiB 00:06:37.444 element at address: 0x20000a5ff980 with size: 0.000244 MiB 
00:06:37.444 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:06:37.444 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:06:37.444 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:06:37.444 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:06:37.444 element at address: 0x200012bff200 with size: 0.000244 MiB 00:06:37.444 element at address: 0x200012bff300 with size: 0.000244 MiB 00:06:37.444 element at address: 0x200012bff400 with size: 0.000244 MiB 00:06:37.444 element at address: 0x200012bff500 with size: 0.000244 MiB 00:06:37.444 element at address: 0x200012bff600 with size: 0.000244 MiB 00:06:37.444 element at address: 0x200012bff700 with size: 0.000244 MiB 00:06:37.444 element at address: 0x200012bff800 with size: 0.000244 MiB 00:06:37.444 element at address: 0x200012bff900 with size: 0.000244 MiB 00:06:37.444 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:06:37.444 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:06:37.445 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:06:37.445 list of memzone associated elements. size: 599.920898 MiB 00:06:37.445 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:06:37.445 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:37.445 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:06:37.445 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:37.445 element at address: 0x200012df4740 with size: 92.045105 MiB 00:06:37.445 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2950608_0 00:06:37.445 element at address: 0x200000dff340 with size: 48.003113 MiB 00:06:37.445 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2950608_0 00:06:37.445 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:06:37.445 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2950608_0 00:06:37.445 element at address: 0x2000197be900 with size: 20.255615 MiB 00:06:37.445 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:37.445 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:06:37.445 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:37.445 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:06:37.445 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2950608_0 00:06:37.445 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:06:37.445 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2950608 00:06:37.445 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:37.445 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2950608 00:06:37.445 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:37.445 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:37.445 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:06:37.445 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:37.445 element at address: 0x200018afde00 with size: 1.008179 MiB 00:06:37.445 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:37.445 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:06:37.445 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:37.445 element at address: 0x200000cff100 with size: 1.000549 MiB 00:06:37.445 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2950608 00:06:37.445 element at address: 0x2000008ffb80 with 
size: 1.000549 MiB 00:06:37.445 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2950608 00:06:37.445 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:06:37.445 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2950608 00:06:37.445 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:06:37.445 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2950608 00:06:37.445 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:06:37.445 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2950608 00:06:37.445 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:06:37.445 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2950608 00:06:37.445 element at address: 0x200018e7dbc0 with size: 0.500549 MiB 00:06:37.445 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:37.445 element at address: 0x200012c72380 with size: 0.500549 MiB 00:06:37.445 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:37.445 element at address: 0x20001967c540 with size: 0.250549 MiB 00:06:37.445 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:37.445 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:06:37.445 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2950608 00:06:37.445 element at address: 0x20000085f180 with size: 0.125549 MiB 00:06:37.445 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2950608 00:06:37.445 element at address: 0x200018af5bc0 with size: 0.031799 MiB 00:06:37.445 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:37.445 element at address: 0x2000280693c0 with size: 0.023804 MiB 00:06:37.445 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:37.445 element at address: 0x20000085af40 with size: 0.016174 MiB 00:06:37.445 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2950608 00:06:37.445 element at address: 0x20002806f540 with size: 0.002502 MiB 00:06:37.445 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:37.445 element at address: 0x2000004ffb40 with size: 0.000366 MiB 00:06:37.445 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2950608 00:06:37.445 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:06:37.445 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2950608 00:06:37.445 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:06:37.445 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2950608 00:06:37.445 element at address: 0x20000a5ffa80 with size: 0.000366 MiB 00:06:37.445 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:37.445 15:11:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:37.445 15:11:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2950608 00:06:37.445 15:11:04 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 2950608 ']' 00:06:37.445 15:11:04 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 2950608 00:06:37.445 15:11:04 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:06:37.445 15:11:04 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:37.445 15:11:04 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2950608 00:06:37.445 15:11:05 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:06:37.445 15:11:05 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:37.445 15:11:05 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2950608' 00:06:37.445 killing process with pid 2950608 00:06:37.445 15:11:05 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 2950608 00:06:37.445 15:11:05 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 2950608 00:06:39.982 00:06:39.982 real 0m3.758s 00:06:39.982 user 0m3.615s 00:06:39.982 sys 0m0.698s 00:06:39.982 15:11:07 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:39.982 15:11:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:39.982 ************************************ 00:06:39.982 END TEST dpdk_mem_utility 00:06:39.982 ************************************ 00:06:39.982 15:11:07 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:39.982 15:11:07 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:39.982 15:11:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:39.982 15:11:07 -- common/autotest_common.sh@10 -- # set +x 00:06:39.982 ************************************ 00:06:39.982 START TEST event 00:06:39.982 ************************************ 00:06:39.982 15:11:07 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:39.982 * Looking for test storage... 00:06:39.982 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:39.982 15:11:07 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:39.982 15:11:07 event -- common/autotest_common.sh@1691 -- # lcov --version 00:06:39.982 15:11:07 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:39.982 15:11:07 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:39.982 15:11:07 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.982 15:11:07 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.982 15:11:07 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.982 15:11:07 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.982 15:11:07 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.982 15:11:07 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.982 15:11:07 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.983 15:11:07 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.983 15:11:07 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.983 15:11:07 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.983 15:11:07 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.983 15:11:07 event -- scripts/common.sh@344 -- # case "$op" in 00:06:39.983 15:11:07 event -- scripts/common.sh@345 -- # : 1 00:06:39.983 15:11:07 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.983 15:11:07 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.983 15:11:07 event -- scripts/common.sh@365 -- # decimal 1 00:06:39.983 15:11:07 event -- scripts/common.sh@353 -- # local d=1 00:06:39.983 15:11:07 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.983 15:11:07 event -- scripts/common.sh@355 -- # echo 1 00:06:39.983 15:11:07 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.983 15:11:07 event -- scripts/common.sh@366 -- # decimal 2 00:06:39.983 15:11:07 event -- scripts/common.sh@353 -- # local d=2 00:06:39.983 15:11:07 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.983 15:11:07 event -- scripts/common.sh@355 -- # echo 2 00:06:39.983 15:11:07 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.983 15:11:07 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.983 15:11:07 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.983 15:11:07 event -- scripts/common.sh@368 -- # return 0 00:06:39.983 15:11:07 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.983 15:11:07 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:39.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.983 --rc genhtml_branch_coverage=1 00:06:39.983 --rc genhtml_function_coverage=1 00:06:39.983 --rc genhtml_legend=1 00:06:39.983 --rc geninfo_all_blocks=1 00:06:39.983 --rc geninfo_unexecuted_blocks=1 00:06:39.983 00:06:39.983 ' 00:06:39.983 15:11:07 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:39.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.983 --rc genhtml_branch_coverage=1 00:06:39.983 --rc genhtml_function_coverage=1 00:06:39.983 --rc genhtml_legend=1 00:06:39.983 --rc geninfo_all_blocks=1 00:06:39.983 --rc geninfo_unexecuted_blocks=1 00:06:39.983 00:06:39.983 ' 00:06:39.983 15:11:07 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:39.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.983 --rc genhtml_branch_coverage=1 00:06:39.983 --rc genhtml_function_coverage=1 00:06:39.983 --rc genhtml_legend=1 00:06:39.983 --rc geninfo_all_blocks=1 00:06:39.983 --rc geninfo_unexecuted_blocks=1 00:06:39.983 00:06:39.983 ' 00:06:39.983 15:11:07 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:39.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.983 --rc genhtml_branch_coverage=1 00:06:39.983 --rc genhtml_function_coverage=1 00:06:39.983 --rc genhtml_legend=1 00:06:39.983 --rc geninfo_all_blocks=1 00:06:39.983 --rc geninfo_unexecuted_blocks=1 00:06:39.983 00:06:39.983 ' 00:06:39.983 15:11:07 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:39.983 15:11:07 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:39.983 15:11:07 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:39.983 15:11:07 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:06:39.983 15:11:07 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:39.983 15:11:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.983 ************************************ 00:06:39.983 START TEST event_perf 00:06:39.983 ************************************ 00:06:39.983 15:11:07 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 
00:06:40.242 Running I/O for 1 seconds...[2024-11-06 15:11:07.637118] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:06:40.242 [2024-11-06 15:11:07.637216] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2951192 ] 00:06:40.242 [2024-11-06 15:11:07.784520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:40.501 [2024-11-06 15:11:07.898001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.501 [2024-11-06 15:11:07.898081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.501 [2024-11-06 15:11:07.898161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.501 [2024-11-06 15:11:07.898190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.878 Running I/O for 1 seconds... 00:06:41.878 lcore 0: 207865 00:06:41.878 lcore 1: 207864 00:06:41.878 lcore 2: 207866 00:06:41.878 lcore 3: 207865 00:06:41.878 done. 00:06:41.878 00:06:41.878 real 0m1.541s 00:06:41.878 user 0m4.359s 00:06:41.878 sys 0m0.178s 00:06:41.878 15:11:09 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:41.878 15:11:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:41.878 ************************************ 00:06:41.878 END TEST event_perf 00:06:41.878 ************************************ 00:06:41.878 15:11:09 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:41.878 15:11:09 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:41.878 15:11:09 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:41.878 15:11:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.878 ************************************ 00:06:41.878 START TEST event_reactor 00:06:41.878 ************************************ 00:06:41.878 15:11:09 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:41.878 [2024-11-06 15:11:09.265868] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:06:41.878 [2024-11-06 15:11:09.265952] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2951430 ] 00:06:41.878 [2024-11-06 15:11:09.410280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.135 [2024-11-06 15:11:09.515645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.512 test_start 00:06:43.512 oneshot 00:06:43.512 tick 100 00:06:43.512 tick 100 00:06:43.512 tick 250 00:06:43.512 tick 100 00:06:43.512 tick 100 00:06:43.512 tick 100 00:06:43.512 tick 250 00:06:43.512 tick 500 00:06:43.512 tick 100 00:06:43.512 tick 100 00:06:43.512 tick 250 00:06:43.512 tick 100 00:06:43.512 tick 100 00:06:43.512 test_end 00:06:43.512 00:06:43.512 real 0m1.523s 00:06:43.512 user 0m1.361s 00:06:43.512 sys 0m0.154s 00:06:43.512 15:11:10 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:43.512 15:11:10 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:43.512 ************************************ 00:06:43.512 END TEST event_reactor 00:06:43.512 ************************************ 00:06:43.512 15:11:10 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:43.512 15:11:10 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:43.512 15:11:10 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:43.512 15:11:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.512 ************************************ 00:06:43.512 START TEST event_reactor_perf 00:06:43.512 ************************************ 00:06:43.512 15:11:10 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:43.512 [2024-11-06 15:11:10.866677] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:06:43.512 [2024-11-06 15:11:10.866771] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2951638 ] 00:06:43.512 [2024-11-06 15:11:11.014368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.512 [2024-11-06 15:11:11.118715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.888 test_start 00:06:44.888 test_end 00:06:44.888 Performance: 394335 events per second 00:06:44.888 00:06:44.888 real 0m1.518s 00:06:44.888 user 0m1.351s 00:06:44.888 sys 0m0.160s 00:06:44.888 15:11:12 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:44.888 15:11:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:44.888 ************************************ 00:06:44.888 END TEST event_reactor_perf 00:06:44.888 ************************************ 00:06:44.888 15:11:12 event -- event/event.sh@49 -- # uname -s 00:06:44.888 15:11:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:44.888 15:11:12 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:44.888 15:11:12 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:44.888 15:11:12 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:44.888 15:11:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.888 ************************************ 00:06:44.888 START TEST event_scheduler 00:06:44.888 ************************************ 00:06:44.888 15:11:12 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:45.148 * Looking for test storage... 
00:06:45.148 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:45.148 15:11:12 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:45.148 15:11:12 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:06:45.148 15:11:12 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:45.148 15:11:12 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.148 15:11:12 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:45.148 15:11:12 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.148 15:11:12 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:45.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.148 --rc genhtml_branch_coverage=1 00:06:45.148 --rc genhtml_function_coverage=1 00:06:45.148 --rc genhtml_legend=1 00:06:45.148 --rc geninfo_all_blocks=1 00:06:45.148 --rc geninfo_unexecuted_blocks=1 00:06:45.148 00:06:45.148 ' 00:06:45.148 15:11:12 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:45.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.148 --rc genhtml_branch_coverage=1 00:06:45.148 --rc genhtml_function_coverage=1 00:06:45.148 --rc genhtml_legend=1 00:06:45.148 --rc geninfo_all_blocks=1 00:06:45.148 --rc geninfo_unexecuted_blocks=1 00:06:45.148 00:06:45.148 ' 00:06:45.148 15:11:12 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:45.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.148 --rc genhtml_branch_coverage=1 00:06:45.148 --rc genhtml_function_coverage=1 00:06:45.148 --rc genhtml_legend=1 00:06:45.148 --rc geninfo_all_blocks=1 00:06:45.148 --rc geninfo_unexecuted_blocks=1 00:06:45.148 00:06:45.148 ' 00:06:45.148 15:11:12 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:45.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.148 --rc genhtml_branch_coverage=1 00:06:45.148 --rc genhtml_function_coverage=1 00:06:45.148 --rc genhtml_legend=1 00:06:45.148 --rc geninfo_all_blocks=1 00:06:45.148 --rc geninfo_unexecuted_blocks=1 00:06:45.148 00:06:45.148 ' 00:06:45.148 15:11:12 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:45.148 15:11:12 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2951879 00:06:45.148 15:11:12 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:45.148 15:11:12 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:45.148 15:11:12 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2951879 
00:06:45.148 15:11:12 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 2951879 ']' 00:06:45.148 15:11:12 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.148 15:11:12 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:45.148 15:11:12 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.148 15:11:12 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:45.148 15:11:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:45.148 [2024-11-06 15:11:12.719098] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:06:45.148 [2024-11-06 15:11:12.719215] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2951879 ] 00:06:45.407 [2024-11-06 15:11:12.870954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:45.407 [2024-11-06 15:11:12.984794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.407 [2024-11-06 15:11:12.984908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.407 [2024-11-06 15:11:12.984938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.407 [2024-11-06 15:11:12.984970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.973 15:11:13 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:45.973 15:11:13 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:06:45.973 15:11:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:45.973 15:11:13 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.973 15:11:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:45.973 [2024-11-06 15:11:13.539522] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:45.973 [2024-11-06 15:11:13.539557] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:45.973 [2024-11-06 15:11:13.539577] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:45.973 [2024-11-06 15:11:13.539592] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:45.973 [2024-11-06 15:11:13.539604] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:45.973 15:11:13 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.973 15:11:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:45.973 15:11:13 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.973 15:11:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.231 [2024-11-06 15:11:13.828828] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
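In outline, the scheduler test bring-up traced above is: start the test app paused with --wait-for-rpc, switch the framework to the dynamic scheduler over RPC, then let initialization finish. The dpdk governor error only means the governor could not be used on this core mask; the dynamic scheduler still comes up and logs its load-limit/core-limit/core-busy settings (20/80/95 here). A condensed, illustrative version of that sequence, run from the spdk checkout (flags copied from the trace, error handling omitted):

  # Launch the scheduler test app on cores 0-3 (main lcore 2), paused until RPC-driven init
  test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  scheduler_pid=$!
  waitforlisten "$scheduler_pid"       # autotest_common.sh helper: block until /var/tmp/spdk.sock is listening

  # Switch to the dynamic scheduler, then complete subsystem initialization
  scripts/rpc.py framework_set_scheduler dynamic
  scripts/rpc.py framework_start_init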
00:06:46.231 15:11:13 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.231 15:11:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:46.231 15:11:13 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:46.231 15:11:13 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:46.231 15:11:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.489 ************************************ 00:06:46.489 START TEST scheduler_create_thread 00:06:46.489 ************************************ 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.489 2 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.489 3 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.489 4 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.489 5 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.489 6 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.489 7 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.489 8 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.489 9 00:06:46.489 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.490 15:11:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:46.490 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.490 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.490 10 00:06:46.490 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.490 15:11:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:46.490 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.490 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.490 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.490 15:11:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:46.490 15:11:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:46.490 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.490 15:11:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.055 15:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.055 15:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:47.055 15:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.055 15:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.430 15:11:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.430 15:11:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:48.430 15:11:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:48.430 15:11:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.430 15:11:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.365 15:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.365 00:06:49.365 real 0m3.109s 00:06:49.365 user 0m0.024s 00:06:49.365 sys 0m0.008s 00:06:49.365 15:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:49.365 15:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.365 ************************************ 00:06:49.365 END TEST scheduler_create_thread 00:06:49.365 ************************************ 00:06:49.624 15:11:17 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:49.624 15:11:17 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2951879 00:06:49.624 15:11:17 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 2951879 ']' 00:06:49.624 15:11:17 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 2951879 00:06:49.624 15:11:17 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:06:49.624 15:11:17 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:49.624 15:11:17 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2951879 00:06:49.624 15:11:17 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:49.624 15:11:17 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:49.624 15:11:17 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2951879' 00:06:49.624 killing process with pid 2951879 00:06:49.624 15:11:17 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 2951879 00:06:49.624 15:11:17 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 2951879 00:06:49.883 [2024-11-06 15:11:17.362812] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
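The scheduler_create_thread sub-test that just finished drives the test's scheduler_plugin RPCs: four busy and four idle threads pinned one per core, an unpinned thread at 30% activity, one thread whose activity is raised to 50% after creation, and one thread that is created and immediately deleted. Condensed to the RPC calls visible in the trace (rpc_cmd is the autotest wrapper around scripts/rpc.py, with scheduler_plugin loadable from the test directory; thread ids 11 and 12 are the values this particular run returned):

  # Pinned threads: one busy (-a 100) and one idle (-a 0) per core mask
  for m in 0x1 0x2 0x4 0x8; do
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$m" -a 100
  done
  for m in 0x1 0x2 0x4 0x8; do
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$m" -a 0
  done

  # Unpinned threads with mixed activity
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)   # 11 in this run
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50

  # Create a thread and delete it again
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)     # 12 in this run
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"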
00:06:51.259 00:06:51.259 real 0m6.067s 00:06:51.259 user 0m12.301s 00:06:51.259 sys 0m0.649s 00:06:51.259 15:11:18 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:51.259 15:11:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:51.259 ************************************ 00:06:51.259 END TEST event_scheduler 00:06:51.259 ************************************ 00:06:51.259 15:11:18 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:51.259 15:11:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:51.259 15:11:18 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:51.259 15:11:18 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:51.259 15:11:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.259 ************************************ 00:06:51.259 START TEST app_repeat 00:06:51.259 ************************************ 00:06:51.259 15:11:18 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:06:51.259 15:11:18 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.259 15:11:18 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.259 15:11:18 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:51.259 15:11:18 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.259 15:11:18 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:51.259 15:11:18 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:51.259 15:11:18 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:51.259 15:11:18 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2952791 00:06:51.259 15:11:18 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:51.259 15:11:18 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:51.259 15:11:18 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2952791' 00:06:51.259 Process app_repeat pid: 2952791 00:06:51.259 15:11:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:51.259 15:11:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:51.259 spdk_app_start Round 0 00:06:51.259 15:11:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2952791 /var/tmp/spdk-nbd.sock 00:06:51.259 15:11:18 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2952791 ']' 00:06:51.259 15:11:18 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.259 15:11:18 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:51.260 15:11:18 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:51.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:51.260 15:11:18 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:51.260 15:11:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.260 [2024-11-06 15:11:18.660306] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:06:51.260 [2024-11-06 15:11:18.660441] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2952791 ] 00:06:51.260 [2024-11-06 15:11:18.811146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.519 [2024-11-06 15:11:18.921478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.519 [2024-11-06 15:11:18.921509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.087 15:11:19 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:52.087 15:11:19 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:52.087 15:11:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:52.345 Malloc0 00:06:52.345 15:11:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:52.604 Malloc1 00:06:52.604 15:11:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:52.604 15:11:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.604 15:11:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.604 15:11:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:52.604 15:11:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.604 15:11:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:52.604 15:11:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:52.604 15:11:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.604 15:11:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.604 15:11:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:52.604 15:11:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.604 15:11:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:52.604 15:11:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:52.604 15:11:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:52.604 15:11:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.604 15:11:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:52.863 /dev/nbd0 00:06:52.863 15:11:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:52.863 15:11:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:52.863 15:11:20 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:52.863 15:11:20 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:52.863 15:11:20 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:52.863 15:11:20 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:52.863 15:11:20 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 
00:06:52.863 15:11:20 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:52.863 15:11:20 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:52.863 15:11:20 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:52.863 15:11:20 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.863 1+0 records in 00:06:52.863 1+0 records out 00:06:52.863 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024489 s, 16.7 MB/s 00:06:52.863 15:11:20 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:52.863 15:11:20 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:52.863 15:11:20 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:52.863 15:11:20 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:52.863 15:11:20 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:52.863 15:11:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.863 15:11:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.863 15:11:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:53.123 /dev/nbd1 00:06:53.123 15:11:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:53.123 15:11:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:53.123 15:11:20 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:53.123 15:11:20 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:53.123 15:11:20 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:53.123 15:11:20 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:53.123 15:11:20 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:53.123 15:11:20 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:53.123 15:11:20 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:53.123 15:11:20 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:53.123 15:11:20 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:53.123 1+0 records in 00:06:53.123 1+0 records out 00:06:53.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243283 s, 16.8 MB/s 00:06:53.123 15:11:20 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:53.123 15:11:20 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:53.123 15:11:20 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:53.123 15:11:20 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:53.123 15:11:20 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:53.123 15:11:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:53.123 15:11:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.123 15:11:20 event.app_repeat -- bdev/nbd_common.sh@95 -- 
# nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.123 15:11:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.123 15:11:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:53.382 { 00:06:53.382 "nbd_device": "/dev/nbd0", 00:06:53.382 "bdev_name": "Malloc0" 00:06:53.382 }, 00:06:53.382 { 00:06:53.382 "nbd_device": "/dev/nbd1", 00:06:53.382 "bdev_name": "Malloc1" 00:06:53.382 } 00:06:53.382 ]' 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:53.382 { 00:06:53.382 "nbd_device": "/dev/nbd0", 00:06:53.382 "bdev_name": "Malloc0" 00:06:53.382 }, 00:06:53.382 { 00:06:53.382 "nbd_device": "/dev/nbd1", 00:06:53.382 "bdev_name": "Malloc1" 00:06:53.382 } 00:06:53.382 ]' 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:53.382 /dev/nbd1' 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:53.382 /dev/nbd1' 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:53.382 256+0 records in 00:06:53.382 256+0 records out 00:06:53.382 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115673 s, 90.6 MB/s 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:53.382 256+0 records in 00:06:53.382 256+0 records out 00:06:53.382 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224489 s, 46.7 MB/s 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:53.382 256+0 records in 00:06:53.382 256+0 records out 00:06:53.382 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251529 s, 41.7 MB/s 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:53.382 15:11:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:53.383 15:11:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:53.383 15:11:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.383 15:11:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.383 15:11:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:53.383 15:11:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:53.383 15:11:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.383 15:11:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:53.641 15:11:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:53.641 15:11:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:53.641 15:11:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:53.641 15:11:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.641 15:11:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.641 15:11:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:53.641 15:11:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.641 15:11:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.641 15:11:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.641 15:11:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:53.900 15:11:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:53.900 15:11:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:53.900 15:11:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:53.900 15:11:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.900 15:11:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.900 15:11:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd1 /proc/partitions 00:06:53.900 15:11:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.900 15:11:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.900 15:11:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.900 15:11:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.900 15:11:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.205 15:11:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:54.205 15:11:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:54.205 15:11:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.205 15:11:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:54.205 15:11:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.205 15:11:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:54.205 15:11:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:54.205 15:11:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:54.205 15:11:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:54.205 15:11:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:54.205 15:11:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:54.205 15:11:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:54.205 15:11:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:54.464 15:11:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:55.841 [2024-11-06 15:11:23.127292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:55.841 [2024-11-06 15:11:23.233043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.841 [2024-11-06 15:11:23.233048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.841 [2024-11-06 15:11:23.415033] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:55.841 [2024-11-06 15:11:23.415099] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:57.744 15:11:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:57.744 15:11:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:57.744 spdk_app_start Round 1 00:06:57.744 15:11:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2952791 /var/tmp/spdk-nbd.sock 00:06:57.744 15:11:24 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2952791 ']' 00:06:57.744 15:11:24 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:57.744 15:11:24 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:57.744 15:11:24 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:57.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
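Round 0 above is the nbd data-verification cycle that every app_repeat round repeats: two 64 MB malloc bdevs are created over /var/tmp/spdk-nbd.sock and exported as /dev/nbd0 and /dev/nbd1 (nbd_common.sh polls /proc/partitions, up to 20 tries, for each device to appear or disappear, which is the "grep -q -w nbdX /proc/partitions" loop in the trace), 1 MiB of random data is written through each device with O_DIRECT and compared back, and the exports are torn down. A compact sketch of one such round using the same RPCs and dd/cmp commands the trace shows; it illustrates the traced commands, not the nbd_common.sh helpers themselves:

  sock=/var/tmp/spdk-nbd.sock
  rpc="scripts/rpc.py -s $sock"

  # Two 64 MB malloc bdevs with 4 KiB blocks, exported over NBD
  $rpc bdev_malloc_create 64 4096        # -> Malloc0
  $rpc bdev_malloc_create 64 4096        # -> Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1

  # Write 1 MiB of random data through each device and verify it reads back intact
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest "$nbd"
  done
  rm nbdrandtest

  # Tear down the exports again
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1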
00:06:57.744 15:11:24 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:57.744 15:11:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:57.744 15:11:25 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:57.744 15:11:25 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:57.744 15:11:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.003 Malloc0 00:06:58.003 15:11:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.262 Malloc1 00:06:58.262 15:11:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.262 15:11:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.262 15:11:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.262 15:11:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:58.262 15:11:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.262 15:11:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:58.262 15:11:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.262 15:11:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.262 15:11:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.262 15:11:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.262 15:11:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.262 15:11:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:58.262 15:11:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:58.262 15:11:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.262 15:11:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.262 15:11:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:58.522 /dev/nbd0 00:06:58.522 15:11:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:58.522 15:11:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:58.522 15:11:25 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:58.522 15:11:25 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:58.522 15:11:25 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:58.522 15:11:25 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:58.522 15:11:25 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:58.522 15:11:25 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:58.522 15:11:25 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:58.522 15:11:25 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:58.522 15:11:25 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:58.522 1+0 records in 00:06:58.522 1+0 records out 00:06:58.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251947 s, 16.3 MB/s 00:06:58.522 15:11:25 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:58.522 15:11:25 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:58.522 15:11:25 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:58.522 15:11:25 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:58.522 15:11:25 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:58.522 15:11:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.522 15:11:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.522 15:11:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:58.781 /dev/nbd1 00:06:58.781 15:11:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:58.781 15:11:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:58.781 15:11:26 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:58.781 15:11:26 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:58.781 15:11:26 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:58.781 15:11:26 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:58.781 15:11:26 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:58.781 15:11:26 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:58.781 15:11:26 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:58.781 15:11:26 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:58.781 15:11:26 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.781 1+0 records in 00:06:58.781 1+0 records out 00:06:58.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309639 s, 13.2 MB/s 00:06:58.781 15:11:26 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:58.781 15:11:26 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:58.781 15:11:26 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:58.781 15:11:26 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:58.781 15:11:26 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:58.781 15:11:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.781 15:11:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.781 15:11:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.781 15:11:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.781 15:11:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:59.040 { 00:06:59.040 
"nbd_device": "/dev/nbd0", 00:06:59.040 "bdev_name": "Malloc0" 00:06:59.040 }, 00:06:59.040 { 00:06:59.040 "nbd_device": "/dev/nbd1", 00:06:59.040 "bdev_name": "Malloc1" 00:06:59.040 } 00:06:59.040 ]' 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:59.040 { 00:06:59.040 "nbd_device": "/dev/nbd0", 00:06:59.040 "bdev_name": "Malloc0" 00:06:59.040 }, 00:06:59.040 { 00:06:59.040 "nbd_device": "/dev/nbd1", 00:06:59.040 "bdev_name": "Malloc1" 00:06:59.040 } 00:06:59.040 ]' 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:59.040 /dev/nbd1' 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:59.040 /dev/nbd1' 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:59.040 256+0 records in 00:06:59.040 256+0 records out 00:06:59.040 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00871802 s, 120 MB/s 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:59.040 256+0 records in 00:06:59.040 256+0 records out 00:06:59.040 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221117 s, 47.4 MB/s 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:59.040 256+0 records in 00:06:59.040 256+0 records out 00:06:59.040 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250478 s, 41.9 MB/s 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.040 15:11:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:59.299 15:11:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:59.299 15:11:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:59.299 15:11:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:59.299 15:11:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.299 15:11:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.299 15:11:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:59.299 15:11:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:59.299 15:11:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.299 15:11:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.299 15:11:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:59.558 15:11:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:59.558 15:11:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:59.558 15:11:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:59.558 15:11:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.558 15:11:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.559 15:11:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:59.559 15:11:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:59.559 15:11:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.559 15:11:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.559 15:11:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.559 15:11:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.818 15:11:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:59.818 15:11:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:59.818 15:11:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.818 15:11:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:59.818 15:11:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:59.818 15:11:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.818 15:11:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:59.818 15:11:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:59.818 15:11:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:59.818 15:11:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:59.818 15:11:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:59.818 15:11:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:59.818 15:11:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:00.078 15:11:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:01.457 [2024-11-06 15:11:28.813387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.457 [2024-11-06 15:11:28.916744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.457 [2024-11-06 15:11:28.916763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.716 [2024-11-06 15:11:29.097812] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:01.716 [2024-11-06 15:11:29.097877] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:03.092 15:11:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:03.092 15:11:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:03.092 spdk_app_start Round 2 00:07:03.092 15:11:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2952791 /var/tmp/spdk-nbd.sock 00:07:03.092 15:11:30 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2952791 ']' 00:07:03.092 15:11:30 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:03.092 15:11:30 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:03.092 15:11:30 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:03.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
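For orientation, the "spdk_app_start Round 0/1/2" markers come from the loop in test/event/event.sh visible in the trace: app_repeat is started once with -t 4, and each round waits for it to listen on the nbd socket (the same pid, 2952791, is waited on every round), runs the malloc/nbd verification sketched earlier, sends spdk_kill_instance SIGTERM over RPC, and sleeps 3 seconds before the next iteration. Roughly, per the traced event.sh@23-35:

  for i in 0 1 2; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # wait for the (re)started app instance
    # ... create Malloc0/Malloc1 and run the nbd write/verify cycle sketched earlier ...
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
    sleep 3
  done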
00:07:03.092 15:11:30 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:03.092 15:11:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:03.350 15:11:30 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:03.350 15:11:30 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:03.350 15:11:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:03.609 Malloc0 00:07:03.609 15:11:31 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:03.868 Malloc1 00:07:03.868 15:11:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:03.868 15:11:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.868 15:11:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.868 15:11:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:03.868 15:11:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.868 15:11:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:03.868 15:11:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:03.868 15:11:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.868 15:11:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.868 15:11:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:03.868 15:11:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.868 15:11:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:03.868 15:11:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:03.868 15:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:03.868 15:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.868 15:11:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:04.127 /dev/nbd0 00:07:04.127 15:11:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:04.127 15:11:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:04.127 15:11:31 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:04.127 15:11:31 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:04.127 15:11:31 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:04.127 15:11:31 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:04.127 15:11:31 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:04.127 15:11:31 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:04.127 15:11:31 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:04.127 15:11:31 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:04.127 15:11:31 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:07:04.127 1+0 records in 00:07:04.127 1+0 records out 00:07:04.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263977 s, 15.5 MB/s 00:07:04.127 15:11:31 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:04.127 15:11:31 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:04.127 15:11:31 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:04.127 15:11:31 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:04.127 15:11:31 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:04.127 15:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.127 15:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.127 15:11:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:04.387 /dev/nbd1 00:07:04.387 15:11:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:04.387 15:11:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:04.387 15:11:31 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:04.387 15:11:31 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:04.387 15:11:31 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:04.387 15:11:31 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:04.387 15:11:31 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:04.387 15:11:31 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:04.387 15:11:31 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:04.387 15:11:31 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:04.387 15:11:31 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:04.387 1+0 records in 00:07:04.387 1+0 records out 00:07:04.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243137 s, 16.8 MB/s 00:07:04.387 15:11:31 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:04.387 15:11:31 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:04.387 15:11:31 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:04.387 15:11:31 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:04.387 15:11:31 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:04.387 15:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.387 15:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.387 15:11:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.387 15:11:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.387 15:11:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.646 15:11:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:04.646 { 00:07:04.646 
"nbd_device": "/dev/nbd0", 00:07:04.646 "bdev_name": "Malloc0" 00:07:04.646 }, 00:07:04.646 { 00:07:04.646 "nbd_device": "/dev/nbd1", 00:07:04.646 "bdev_name": "Malloc1" 00:07:04.646 } 00:07:04.646 ]' 00:07:04.646 15:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:04.646 { 00:07:04.646 "nbd_device": "/dev/nbd0", 00:07:04.646 "bdev_name": "Malloc0" 00:07:04.646 }, 00:07:04.646 { 00:07:04.646 "nbd_device": "/dev/nbd1", 00:07:04.646 "bdev_name": "Malloc1" 00:07:04.646 } 00:07:04.646 ]' 00:07:04.646 15:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.646 15:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:04.647 /dev/nbd1' 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:04.647 /dev/nbd1' 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:04.647 256+0 records in 00:07:04.647 256+0 records out 00:07:04.647 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110974 s, 94.5 MB/s 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:04.647 256+0 records in 00:07:04.647 256+0 records out 00:07:04.647 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223243 s, 47.0 MB/s 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:04.647 256+0 records in 00:07:04.647 256+0 records out 00:07:04.647 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250417 s, 41.9 MB/s 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.647 15:11:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:04.906 15:11:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.906 15:11:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.906 15:11:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.906 15:11:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.906 15:11:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.906 15:11:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:04.906 15:11:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:04.906 15:11:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.906 15:11:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.906 15:11:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:05.165 15:11:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:05.165 15:11:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:05.165 15:11:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:05.165 15:11:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.165 15:11:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.165 15:11:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:05.165 15:11:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:05.165 15:11:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.165 15:11:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.165 15:11:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.165 15:11:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.424 15:11:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:05.424 15:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:05.424 15:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.424 15:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:05.424 15:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.424 15:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:05.424 15:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:05.424 15:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:05.424 15:11:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:05.424 15:11:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:05.424 15:11:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:05.424 15:11:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:05.424 15:11:32 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:05.993 15:11:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:06.932 [2024-11-06 15:11:34.498822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:07.191 [2024-11-06 15:11:34.604074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.191 [2024-11-06 15:11:34.604079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.191 [2024-11-06 15:11:34.771205] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:07.191 [2024-11-06 15:11:34.771264] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:09.094 15:11:36 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2952791 /var/tmp/spdk-nbd.sock 00:07:09.094 15:11:36 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2952791 ']' 00:07:09.094 15:11:36 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:09.094 15:11:36 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:09.094 15:11:36 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:09.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
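A rough sketch of what one app_repeat round above boils down to — not the actual nbd_common.sh helpers, just the RPC calls and dd/cmp steps visible in the trace strung together. Here rpc.py stands in for /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py, and it assumes an SPDK app is already serving /var/tmp/spdk-nbd.sock with the nbd kernel module loaded:

    RPC="rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096                        # -> Malloc0
    $RPC bdev_malloc_create 64 4096                        # -> Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    $RPC nbd_get_disks                                     # sanity check: two exported devices
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256    # 1 MiB of reference data
    for d in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of=$d bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest $d                        # verify the write round-trips
    done
    rm nbdrandtest
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1
    $RPC spdk_kill_instance SIGTERM                        # ends this round; event.sh then sleeps 3s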
00:07:09.094 15:11:36 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:09.094 15:11:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:09.094 15:11:36 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:09.094 15:11:36 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:09.094 15:11:36 event.app_repeat -- event/event.sh@39 -- # killprocess 2952791 00:07:09.094 15:11:36 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 2952791 ']' 00:07:09.094 15:11:36 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 2952791 00:07:09.094 15:11:36 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:07:09.094 15:11:36 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:09.094 15:11:36 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2952791 00:07:09.094 15:11:36 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:09.094 15:11:36 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:09.094 15:11:36 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2952791' 00:07:09.094 killing process with pid 2952791 00:07:09.094 15:11:36 event.app_repeat -- common/autotest_common.sh@971 -- # kill 2952791 00:07:09.094 15:11:36 event.app_repeat -- common/autotest_common.sh@976 -- # wait 2952791 00:07:10.031 spdk_app_start is called in Round 0. 00:07:10.031 Shutdown signal received, stop current app iteration 00:07:10.031 Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 reinitialization... 00:07:10.031 spdk_app_start is called in Round 1. 00:07:10.031 Shutdown signal received, stop current app iteration 00:07:10.031 Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 reinitialization... 00:07:10.031 spdk_app_start is called in Round 2. 00:07:10.031 Shutdown signal received, stop current app iteration 00:07:10.031 Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 reinitialization... 00:07:10.031 spdk_app_start is called in Round 3. 
00:07:10.031 Shutdown signal received, stop current app iteration 00:07:10.031 15:11:37 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:10.031 15:11:37 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:10.031 00:07:10.031 real 0m19.014s 00:07:10.031 user 0m39.815s 00:07:10.031 sys 0m3.301s 00:07:10.031 15:11:37 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:10.031 15:11:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:10.031 ************************************ 00:07:10.031 END TEST app_repeat 00:07:10.031 ************************************ 00:07:10.031 15:11:37 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:10.031 15:11:37 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:10.031 15:11:37 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:10.031 15:11:37 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:10.031 15:11:37 event -- common/autotest_common.sh@10 -- # set +x 00:07:10.290 ************************************ 00:07:10.290 START TEST cpu_locks 00:07:10.290 ************************************ 00:07:10.290 15:11:37 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:10.290 * Looking for test storage... 00:07:10.290 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:07:10.290 15:11:37 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:10.290 15:11:37 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:10.290 15:11:37 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:07:10.290 15:11:37 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:10.290 15:11:37 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.290 15:11:37 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.290 15:11:37 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.290 15:11:37 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.290 15:11:37 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.290 15:11:37 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.290 15:11:37 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.290 15:11:37 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.290 15:11:37 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.290 15:11:37 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.290 15:11:37 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.290 15:11:37 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:10.290 15:11:37 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:10.291 15:11:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.291 15:11:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.291 15:11:37 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:10.291 15:11:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:10.291 15:11:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.291 15:11:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:10.291 15:11:37 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.291 15:11:37 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:10.291 15:11:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:10.291 15:11:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.291 15:11:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:10.291 15:11:37 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.291 15:11:37 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.291 15:11:37 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.291 15:11:37 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:10.291 15:11:37 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.291 15:11:37 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:10.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.291 --rc genhtml_branch_coverage=1 00:07:10.291 --rc genhtml_function_coverage=1 00:07:10.291 --rc genhtml_legend=1 00:07:10.291 --rc geninfo_all_blocks=1 00:07:10.291 --rc geninfo_unexecuted_blocks=1 00:07:10.291 00:07:10.291 ' 00:07:10.291 15:11:37 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:10.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.291 --rc genhtml_branch_coverage=1 00:07:10.291 --rc genhtml_function_coverage=1 00:07:10.291 --rc genhtml_legend=1 00:07:10.291 --rc geninfo_all_blocks=1 00:07:10.291 --rc geninfo_unexecuted_blocks=1 00:07:10.291 00:07:10.291 ' 00:07:10.291 15:11:37 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:10.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.291 --rc genhtml_branch_coverage=1 00:07:10.291 --rc genhtml_function_coverage=1 00:07:10.291 --rc genhtml_legend=1 00:07:10.291 --rc geninfo_all_blocks=1 00:07:10.291 --rc geninfo_unexecuted_blocks=1 00:07:10.291 00:07:10.291 ' 00:07:10.291 15:11:37 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:10.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.291 --rc genhtml_branch_coverage=1 00:07:10.291 --rc genhtml_function_coverage=1 00:07:10.291 --rc genhtml_legend=1 00:07:10.291 --rc geninfo_all_blocks=1 00:07:10.291 --rc geninfo_unexecuted_blocks=1 00:07:10.291 00:07:10.291 ' 00:07:10.291 15:11:37 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:10.291 15:11:37 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:10.291 15:11:37 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:10.291 15:11:37 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:10.291 15:11:37 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:10.291 15:11:37 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:10.291 15:11:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.550 ************************************ 
00:07:10.550 START TEST default_locks 00:07:10.550 ************************************ 00:07:10.550 15:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:07:10.550 15:11:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2955574 00:07:10.550 15:11:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2955574 00:07:10.550 15:11:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:10.550 15:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 2955574 ']' 00:07:10.550 15:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.550 15:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:10.550 15:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.550 15:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:10.550 15:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.550 [2024-11-06 15:11:38.042417] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:07:10.550 [2024-11-06 15:11:38.042541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2955574 ] 00:07:10.809 [2024-11-06 15:11:38.191259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.809 [2024-11-06 15:11:38.298947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.746 15:11:39 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:11.746 15:11:39 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:07:11.746 15:11:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2955574 00:07:11.746 15:11:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2955574 00:07:11.746 15:11:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:12.313 lslocks: write error 00:07:12.313 15:11:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2955574 00:07:12.313 15:11:39 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 2955574 ']' 00:07:12.313 15:11:39 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 2955574 00:07:12.313 15:11:39 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:07:12.313 15:11:39 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:12.313 15:11:39 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2955574 00:07:12.313 15:11:39 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:12.313 15:11:39 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:12.313 15:11:39 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
2955574' 00:07:12.313 killing process with pid 2955574 00:07:12.313 15:11:39 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 2955574 00:07:12.313 15:11:39 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 2955574 00:07:14.848 15:11:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2955574 00:07:14.848 15:11:42 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:14.848 15:11:42 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2955574 00:07:14.848 15:11:42 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:14.848 15:11:42 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.848 15:11:42 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:14.848 15:11:42 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.848 15:11:42 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2955574 00:07:14.848 15:11:42 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 2955574 ']' 00:07:14.848 15:11:42 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.848 15:11:42 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:14.848 15:11:42 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:14.848 15:11:42 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:14.848 15:11:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.848 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2955574) - No such process 00:07:14.848 ERROR: process (pid: 2955574) is no longer running 00:07:14.849 15:11:42 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:14.849 15:11:42 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:07:14.849 15:11:42 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:14.849 15:11:42 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:14.849 15:11:42 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:14.849 15:11:42 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:14.849 15:11:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:14.849 15:11:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:14.849 15:11:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:14.849 15:11:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:14.849 00:07:14.849 real 0m4.120s 00:07:14.849 user 0m4.051s 00:07:14.849 sys 0m0.870s 00:07:14.849 15:11:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:14.849 15:11:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.849 ************************************ 00:07:14.849 END TEST default_locks 00:07:14.849 ************************************ 00:07:14.849 15:11:42 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:14.849 15:11:42 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:14.849 15:11:42 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:14.849 15:11:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.849 ************************************ 00:07:14.849 START TEST default_locks_via_rpc 00:07:14.849 ************************************ 00:07:14.849 15:11:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:07:14.849 15:11:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2956188 00:07:14.849 15:11:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2956188 00:07:14.849 15:11:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:14.849 15:11:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2956188 ']' 00:07:14.849 15:11:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.849 15:11:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:14.849 15:11:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
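The default_locks trace above is the basic lock-file check: a single-core spdk_tgt must hold an spdk_cpu_lock entry while it runs, and once it is killed the pid must stay dead. A condensed, approximate version of that flow (spdk_tgt is build/bin/spdk_tgt from the workspace; locks_exist, killprocess and waitforlisten are the autotest helpers seen in the trace, reduced here to plain shell):

    spdk_tgt -m 0x1 &                          # single-core target
    pid=$!
    lslocks -p $pid | grep -q spdk_cpu_lock    # locks_exist: the core-0 lock must be listed
                                               # ("lslocks: write error" shows up in each passing run above)
    kill $pid && wait $pid                     # killprocess
    ! kill -0 $pid 2>/dev/null                 # NOT waitforlisten: a dead pid must not reappear,
                                               # hence "kill: (2955574) - No such process" and es=1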
00:07:14.849 15:11:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:14.849 15:11:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.849 [2024-11-06 15:11:42.250105] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:07:14.849 [2024-11-06 15:11:42.250229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2956188 ] 00:07:14.849 [2024-11-06 15:11:42.393600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.107 [2024-11-06 15:11:42.501363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.677 15:11:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:15.677 15:11:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:15.677 15:11:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:15.677 15:11:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.677 15:11:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.677 15:11:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.677 15:11:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:15.677 15:11:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:15.677 15:11:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:15.677 15:11:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:15.677 15:11:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:15.678 15:11:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.678 15:11:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.678 15:11:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.678 15:11:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2956188 00:07:15.678 15:11:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2956188 00:07:15.678 15:11:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:16.245 15:11:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2956188 00:07:16.245 15:11:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 2956188 ']' 00:07:16.245 15:11:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 2956188 00:07:16.245 15:11:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:07:16.245 15:11:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:16.245 15:11:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2956188 00:07:16.245 15:11:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:16.245 
15:11:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:16.245 15:11:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2956188' 00:07:16.245 killing process with pid 2956188 00:07:16.245 15:11:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 2956188 00:07:16.245 15:11:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 2956188 00:07:18.782 00:07:18.782 real 0m3.987s 00:07:18.782 user 0m3.943s 00:07:18.782 sys 0m0.797s 00:07:18.782 15:11:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:18.782 15:11:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.782 ************************************ 00:07:18.782 END TEST default_locks_via_rpc 00:07:18.782 ************************************ 00:07:18.782 15:11:46 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:18.782 15:11:46 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:18.782 15:11:46 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:18.782 15:11:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.782 ************************************ 00:07:18.782 START TEST non_locking_app_on_locked_coremask 00:07:18.782 ************************************ 00:07:18.782 15:11:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:07:18.782 15:11:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2956713 00:07:18.782 15:11:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2956713 /var/tmp/spdk.sock 00:07:18.782 15:11:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:18.782 15:11:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2956713 ']' 00:07:18.782 15:11:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.782 15:11:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:18.782 15:11:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.782 15:11:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:18.782 15:11:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.782 [2024-11-06 15:11:46.324300] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:07:18.782 [2024-11-06 15:11:46.324410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2956713 ] 00:07:19.040 [2024-11-06 15:11:46.475330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.040 [2024-11-06 15:11:46.587903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.975 15:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:19.975 15:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:19.975 15:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2956793 00:07:19.975 15:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2956793 /var/tmp/spdk2.sock 00:07:19.975 15:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:19.976 15:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2956793 ']' 00:07:19.976 15:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.976 15:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:19.976 15:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:19.976 15:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:19.976 15:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.976 [2024-11-06 15:11:47.462834] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:07:19.976 [2024-11-06 15:11:47.462945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2956793 ] 00:07:20.234 [2024-11-06 15:11:47.653517] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:20.234 [2024-11-06 15:11:47.653567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.234 [2024-11-06 15:11:47.868180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.769 15:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:22.769 15:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:22.769 15:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2956713 00:07:22.769 15:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2956713 00:07:22.769 15:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:23.028 lslocks: write error 00:07:23.028 15:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2956713 00:07:23.028 15:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2956713 ']' 00:07:23.028 15:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2956713 00:07:23.028 15:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:23.028 15:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:23.028 15:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2956713 00:07:23.028 15:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:23.028 15:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:23.028 15:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2956713' 00:07:23.028 killing process with pid 2956713 00:07:23.028 15:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2956713 00:07:23.028 15:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2956713 00:07:28.300 15:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2956793 00:07:28.300 15:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2956793 ']' 00:07:28.300 15:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2956793 00:07:28.300 15:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:28.300 15:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:28.300 15:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2956793 00:07:28.300 15:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:28.300 15:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:28.300 15:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2956793' 00:07:28.300 
killing process with pid 2956793 00:07:28.300 15:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2956793 00:07:28.300 15:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2956793 00:07:30.204 00:07:30.204 real 0m11.124s 00:07:30.204 user 0m11.285s 00:07:30.204 sys 0m1.460s 00:07:30.204 15:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:30.204 15:11:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.204 ************************************ 00:07:30.204 END TEST non_locking_app_on_locked_coremask 00:07:30.204 ************************************ 00:07:30.204 15:11:57 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:30.204 15:11:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:30.204 15:11:57 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:30.204 15:11:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.204 ************************************ 00:07:30.204 START TEST locking_app_on_unlocked_coremask 00:07:30.204 ************************************ 00:07:30.204 15:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:07:30.204 15:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2958257 00:07:30.204 15:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2958257 /var/tmp/spdk.sock 00:07:30.204 15:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:30.204 15:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2958257 ']' 00:07:30.204 15:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.204 15:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:30.204 15:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.204 15:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:30.204 15:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.204 [2024-11-06 15:11:57.523055] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:07:30.204 [2024-11-06 15:11:57.523185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2958257 ] 00:07:30.204 [2024-11-06 15:11:57.671228] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
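Both the non_locking_app_on_locked_coremask test that finishes above and the locking_app_on_unlocked_coremask test that starts next revolve around the --disable-cpumask-locks flag: an instance started with it logs "CPU core locks deactivated" and never takes the per-core lock, so it can share a cpumask with an instance that does. A sketch of the two-instance setup, with masks and socket paths taken from the trace and everything else illustrative:

    spdk_tgt -m 0x1 &                                                 # first instance locks core 0
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # same mask, no lock taken,
                                                                      # so both targets coexist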
00:07:30.204 [2024-11-06 15:11:57.671284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.204 [2024-11-06 15:11:57.776121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.142 15:11:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:31.142 15:11:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:31.142 15:11:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2958311 00:07:31.142 15:11:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2958311 /var/tmp/spdk2.sock 00:07:31.142 15:11:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:31.142 15:11:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2958311 ']' 00:07:31.142 15:11:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:31.142 15:11:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:31.142 15:11:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:31.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:31.142 15:11:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:31.142 15:11:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.142 [2024-11-06 15:11:58.642036] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:07:31.142 [2024-11-06 15:11:58.642152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2958311 ] 00:07:31.401 [2024-11-06 15:11:58.831146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.660 [2024-11-06 15:11:59.041817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.568 15:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:33.568 15:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:33.568 15:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2958311 00:07:33.568 15:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:33.568 15:12:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2958311 00:07:34.504 lslocks: write error 00:07:34.504 15:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2958257 00:07:34.504 15:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2958257 ']' 00:07:34.504 15:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 2958257 00:07:34.504 15:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:34.504 15:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:34.504 15:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2958257 00:07:34.504 15:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:34.504 15:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:34.504 15:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2958257' 00:07:34.504 killing process with pid 2958257 00:07:34.504 15:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 2958257 00:07:34.504 15:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 2958257 00:07:39.777 15:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2958311 00:07:39.777 15:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2958311 ']' 00:07:39.777 15:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 2958311 00:07:39.777 15:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:39.777 15:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:39.777 15:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2958311 00:07:39.777 15:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:39.777 15:12:06 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:39.777 15:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2958311' 00:07:39.777 killing process with pid 2958311 00:07:39.777 15:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 2958311 00:07:39.777 15:12:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 2958311 00:07:41.686 00:07:41.686 real 0m11.582s 00:07:41.686 user 0m11.784s 00:07:41.686 sys 0m1.610s 00:07:41.686 15:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:41.686 15:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.686 ************************************ 00:07:41.686 END TEST locking_app_on_unlocked_coremask 00:07:41.686 ************************************ 00:07:41.686 15:12:09 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:41.686 15:12:09 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:41.686 15:12:09 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:41.686 15:12:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:41.686 ************************************ 00:07:41.686 START TEST locking_app_on_locked_coremask 00:07:41.686 ************************************ 00:07:41.686 15:12:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:07:41.686 15:12:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2960268 00:07:41.686 15:12:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2960268 /var/tmp/spdk.sock 00:07:41.686 15:12:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:41.686 15:12:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2960268 ']' 00:07:41.686 15:12:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.686 15:12:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:41.686 15:12:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.686 15:12:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:41.686 15:12:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.686 [2024-11-06 15:12:09.187963] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:07:41.686 [2024-11-06 15:12:09.188065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2960268 ] 00:07:41.945 [2024-11-06 15:12:09.333134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.946 [2024-11-06 15:12:09.444068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.884 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:42.884 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:42.884 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2960450 00:07:42.884 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2960450 /var/tmp/spdk2.sock 00:07:42.884 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:42.884 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:42.884 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2960450 /var/tmp/spdk2.sock 00:07:42.884 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:42.884 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.884 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:42.884 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.884 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2960450 /var/tmp/spdk2.sock 00:07:42.884 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2960450 ']' 00:07:42.884 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:42.884 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:42.884 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:42.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:42.884 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:42.884 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:42.884 [2024-11-06 15:12:10.320664] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:07:42.884 [2024-11-06 15:12:10.320773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2960450 ] 00:07:42.884 [2024-11-06 15:12:10.503373] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2960268 has claimed it. 00:07:42.884 [2024-11-06 15:12:10.503437] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:43.453 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2960450) - No such process 00:07:43.453 ERROR: process (pid: 2960450) is no longer running 00:07:43.453 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:43.453 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:43.453 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:43.453 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:43.453 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:43.453 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:43.453 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2960268 00:07:43.453 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2960268 00:07:43.453 15:12:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:44.023 lslocks: write error 00:07:44.023 15:12:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2960268 00:07:44.023 15:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2960268 ']' 00:07:44.023 15:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2960268 00:07:44.023 15:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:44.023 15:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:44.023 15:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2960268 00:07:44.023 15:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:44.023 15:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:44.023 15:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2960268' 00:07:44.023 killing process with pid 2960268 00:07:44.023 15:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2960268 00:07:44.023 15:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2960268 00:07:46.562 00:07:46.562 real 0m4.564s 00:07:46.562 user 0m4.680s 00:07:46.562 sys 0m0.963s 00:07:46.562 15:12:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:07:46.562 15:12:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.562 ************************************ 00:07:46.562 END TEST locking_app_on_locked_coremask 00:07:46.562 ************************************ 00:07:46.562 15:12:13 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:46.562 15:12:13 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:46.562 15:12:13 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:46.562 15:12:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:46.562 ************************************ 00:07:46.562 START TEST locking_overlapped_coremask 00:07:46.562 ************************************ 00:07:46.562 15:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:07:46.562 15:12:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2960870 00:07:46.562 15:12:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2960870 /var/tmp/spdk.sock 00:07:46.562 15:12:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:46.562 15:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 2960870 ']' 00:07:46.562 15:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.562 15:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:46.562 15:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.562 15:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:46.562 15:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.562 [2024-11-06 15:12:13.842839] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:07:46.563 [2024-11-06 15:12:13.842949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2960870 ] 00:07:46.563 [2024-11-06 15:12:13.994342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:46.563 [2024-11-06 15:12:14.106798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.563 [2024-11-06 15:12:14.106859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.563 [2024-11-06 15:12:14.106884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.503 15:12:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:47.503 15:12:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:47.503 15:12:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2961049 00:07:47.503 15:12:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2961049 /var/tmp/spdk2.sock 00:07:47.503 15:12:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:47.503 15:12:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:47.503 15:12:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2961049 /var/tmp/spdk2.sock 00:07:47.503 15:12:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:47.503 15:12:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.503 15:12:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:47.503 15:12:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.503 15:12:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2961049 /var/tmp/spdk2.sock 00:07:47.503 15:12:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 2961049 ']' 00:07:47.503 15:12:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:47.503 15:12:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:47.503 15:12:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:47.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:47.504 15:12:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:47.504 15:12:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.504 [2024-11-06 15:12:15.007855] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:07:47.504 [2024-11-06 15:12:15.007964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2961049 ] 00:07:47.763 [2024-11-06 15:12:15.199948] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2960870 has claimed it. 00:07:47.763 [2024-11-06 15:12:15.200014] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:48.022 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2961049) - No such process 00:07:48.022 ERROR: process (pid: 2961049) is no longer running 00:07:48.022 15:12:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:48.023 15:12:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:48.023 15:12:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:48.023 15:12:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:48.023 15:12:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:48.023 15:12:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:48.023 15:12:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:48.023 15:12:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:48.023 15:12:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:48.023 15:12:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:48.023 15:12:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2960870 00:07:48.023 15:12:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 2960870 ']' 00:07:48.023 15:12:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 2960870 00:07:48.023 15:12:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:07:48.023 15:12:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:48.023 15:12:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2960870 00:07:48.282 15:12:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:48.283 15:12:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:48.283 15:12:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2960870' 00:07:48.283 killing process with pid 2960870 00:07:48.283 15:12:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 2960870 00:07:48.283 15:12:15 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 2960870 00:07:50.823 00:07:50.823 real 0m4.278s 00:07:50.823 user 0m11.610s 00:07:50.823 sys 0m0.757s 00:07:50.823 15:12:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:50.823 15:12:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:50.823 ************************************ 00:07:50.823 END TEST locking_overlapped_coremask 00:07:50.823 ************************************ 00:07:50.823 15:12:18 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:50.823 15:12:18 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:50.823 15:12:18 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:50.823 15:12:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.823 ************************************ 00:07:50.823 START TEST locking_overlapped_coremask_via_rpc 00:07:50.823 ************************************ 00:07:50.823 15:12:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:07:50.823 15:12:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2961548 00:07:50.823 15:12:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2961548 /var/tmp/spdk.sock 00:07:50.823 15:12:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:50.823 15:12:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2961548 ']' 00:07:50.823 15:12:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.823 15:12:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:50.823 15:12:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.823 15:12:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:50.823 15:12:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.823 [2024-11-06 15:12:18.205613] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:07:50.823 [2024-11-06 15:12:18.205724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2961548 ] 00:07:50.823 [2024-11-06 15:12:18.351895] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
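The "CPU core locks deactivated" notice above is the effect of the --disable-cpumask-locks flag: the target starts without taking its per-core lock files under /var/tmp/spdk_cpu_lock_NNN, and this test only claims them later through the framework_enable_cpumask_locks RPC. Those lock files are what the lslocks and glob checks elsewhere in this trace inspect; a minimal way to look at them by hand (TGT_PID is a placeholder, not a pid from this run):
    # illustrative check of the per-core lock files held by a running target
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null          # a -m 0x7 target holding its locks shows _000 _001 _002
    lslocks -p "$TGT_PID" | grep -q spdk_cpu_lock && echo "core locks held"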
00:07:50.823 [2024-11-06 15:12:18.351945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.082 [2024-11-06 15:12:18.460733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.082 [2024-11-06 15:12:18.460795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.082 [2024-11-06 15:12:18.460822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.650 15:12:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:51.650 15:12:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:51.650 15:12:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2961638 00:07:51.650 15:12:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2961638 /var/tmp/spdk2.sock 00:07:51.650 15:12:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:51.650 15:12:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2961638 ']' 00:07:51.650 15:12:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:51.650 15:12:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:51.650 15:12:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:51.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:51.650 15:12:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:51.650 15:12:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.910 [2024-11-06 15:12:19.354759] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:07:51.910 [2024-11-06 15:12:19.354862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2961638 ] 00:07:52.169 [2024-11-06 15:12:19.547972] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:52.169 [2024-11-06 15:12:19.548027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:52.169 [2024-11-06 15:12:19.786388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.169 [2024-11-06 15:12:19.786450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.169 [2024-11-06 15:12:19.786481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:54.713 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:54.713 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:54.713 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:54.713 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.713 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.713 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.713 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:54.713 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:54.713 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:54.713 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:54.713 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.713 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:54.713 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.714 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:54.714 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.714 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.714 [2024-11-06 15:12:21.874268] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2961548 has claimed it. 
00:07:54.714 request: 00:07:54.714 { 00:07:54.714 "method": "framework_enable_cpumask_locks", 00:07:54.714 "req_id": 1 00:07:54.714 } 00:07:54.714 Got JSON-RPC error response 00:07:54.714 response: 00:07:54.714 { 00:07:54.714 "code": -32603, 00:07:54.714 "message": "Failed to claim CPU core: 2" 00:07:54.714 } 00:07:54.714 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:54.714 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:54.714 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:54.714 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:54.714 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:54.714 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2961548 /var/tmp/spdk.sock 00:07:54.714 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2961548 ']' 00:07:54.714 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.714 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:54.714 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.714 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:54.714 15:12:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.714 15:12:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:54.714 15:12:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:54.714 15:12:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2961638 /var/tmp/spdk2.sock 00:07:54.714 15:12:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2961638 ']' 00:07:54.714 15:12:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:54.714 15:12:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:54.714 15:12:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:54.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
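The -32603 response above is the expected result of this test: both targets were started with --disable-cpumask-locks, the first (pid 2961548, mask 0x7) then claimed cores 0-2 via the RPC, so the second (mask 0x1c, overlapping on core 2) cannot claim its set. A rough sketch of that sequence, reusing the same paths and sockets as this run; the waits between steps and the cleanup are elided:
    # rough sketch of the sequence exercised above (paths as in this workspace)
    BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $BIN -m 0x7  --disable-cpumask-locks &                         # primary on cores 0-2, no locks taken yet
    $BIN -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # secondary on cores 2-4
    $RPC framework_enable_cpumask_locks                            # primary claims cores 0-2: succeeds
    $RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks     # secondary cannot claim core 2: the -32603 above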
00:07:54.714 15:12:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:54.714 15:12:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.714 15:12:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:54.714 15:12:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:54.714 15:12:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:54.714 15:12:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:54.714 15:12:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:54.714 15:12:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:54.714 00:07:54.714 real 0m4.196s 00:07:54.714 user 0m1.133s 00:07:54.714 sys 0m0.239s 00:07:54.714 15:12:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:54.714 15:12:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.714 ************************************ 00:07:54.714 END TEST locking_overlapped_coremask_via_rpc 00:07:54.714 ************************************ 00:07:54.714 15:12:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:54.714 15:12:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2961548 ]] 00:07:54.714 15:12:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2961548 00:07:54.714 15:12:22 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2961548 ']' 00:07:54.714 15:12:22 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2961548 00:07:54.714 15:12:22 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:54.972 15:12:22 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:54.972 15:12:22 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2961548 00:07:54.972 15:12:22 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:54.972 15:12:22 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:54.972 15:12:22 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2961548' 00:07:54.972 killing process with pid 2961548 00:07:54.972 15:12:22 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 2961548 00:07:54.972 15:12:22 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 2961548 00:07:57.540 15:12:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2961638 ]] 00:07:57.540 15:12:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2961638 00:07:57.540 15:12:24 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2961638 ']' 00:07:57.540 15:12:24 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2961638 00:07:57.540 15:12:24 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:57.540 15:12:24 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:07:57.540 15:12:24 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2961638 00:07:57.540 15:12:24 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:07:57.540 15:12:24 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:07:57.540 15:12:24 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2961638' 00:07:57.540 killing process with pid 2961638 00:07:57.540 15:12:24 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 2961638 00:07:57.540 15:12:24 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 2961638 00:07:59.623 15:12:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:59.623 15:12:27 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:59.623 15:12:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2961548 ]] 00:07:59.623 15:12:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2961548 00:07:59.623 15:12:27 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2961548 ']' 00:07:59.623 15:12:27 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2961548 00:07:59.623 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2961548) - No such process 00:07:59.623 15:12:27 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 2961548 is not found' 00:07:59.623 Process with pid 2961548 is not found 00:07:59.882 15:12:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2961638 ]] 00:07:59.882 15:12:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2961638 00:07:59.882 15:12:27 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2961638 ']' 00:07:59.882 15:12:27 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2961638 00:07:59.882 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2961638) - No such process 00:07:59.882 15:12:27 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 2961638 is not found' 00:07:59.882 Process with pid 2961638 is not found 00:07:59.882 15:12:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:59.882 00:07:59.882 real 0m49.566s 00:07:59.882 user 1m24.000s 00:07:59.882 sys 0m8.207s 00:07:59.882 15:12:27 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:59.882 15:12:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:59.882 ************************************ 00:07:59.882 END TEST cpu_locks 00:07:59.882 ************************************ 00:07:59.882 00:07:59.882 real 1m19.959s 00:07:59.882 user 2m23.503s 00:07:59.882 sys 0m13.117s 00:07:59.882 15:12:27 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:59.882 15:12:27 event -- common/autotest_common.sh@10 -- # set +x 00:07:59.882 ************************************ 00:07:59.882 END TEST event 00:07:59.882 ************************************ 00:07:59.882 15:12:27 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:59.882 15:12:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:59.882 15:12:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:59.882 15:12:27 -- common/autotest_common.sh@10 -- # set +x 00:07:59.882 ************************************ 00:07:59.882 START TEST thread 00:07:59.882 ************************************ 00:07:59.882 15:12:27 thread -- common/autotest_common.sh@1127 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:59.882 * Looking for test storage... 00:07:59.882 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:07:59.882 15:12:27 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:59.882 15:12:27 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:07:59.882 15:12:27 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:00.141 15:12:27 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:00.141 15:12:27 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.141 15:12:27 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.141 15:12:27 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.141 15:12:27 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.141 15:12:27 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.141 15:12:27 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:00.141 15:12:27 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.141 15:12:27 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.141 15:12:27 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.141 15:12:27 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.141 15:12:27 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.141 15:12:27 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:00.141 15:12:27 thread -- scripts/common.sh@345 -- # : 1 00:08:00.141 15:12:27 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.141 15:12:27 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:00.141 15:12:27 thread -- scripts/common.sh@365 -- # decimal 1 00:08:00.141 15:12:27 thread -- scripts/common.sh@353 -- # local d=1 00:08:00.141 15:12:27 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.141 15:12:27 thread -- scripts/common.sh@355 -- # echo 1 00:08:00.141 15:12:27 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.141 15:12:27 thread -- scripts/common.sh@366 -- # decimal 2 00:08:00.141 15:12:27 thread -- scripts/common.sh@353 -- # local d=2 00:08:00.141 15:12:27 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.141 15:12:27 thread -- scripts/common.sh@355 -- # echo 2 00:08:00.141 15:12:27 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.141 15:12:27 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.141 15:12:27 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.141 15:12:27 thread -- scripts/common.sh@368 -- # return 0 00:08:00.141 15:12:27 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.141 15:12:27 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:00.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.141 --rc genhtml_branch_coverage=1 00:08:00.141 --rc genhtml_function_coverage=1 00:08:00.141 --rc genhtml_legend=1 00:08:00.141 --rc geninfo_all_blocks=1 00:08:00.141 --rc geninfo_unexecuted_blocks=1 00:08:00.141 00:08:00.141 ' 00:08:00.141 15:12:27 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:00.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.141 --rc genhtml_branch_coverage=1 00:08:00.141 --rc genhtml_function_coverage=1 00:08:00.141 --rc genhtml_legend=1 00:08:00.141 --rc geninfo_all_blocks=1 00:08:00.141 --rc geninfo_unexecuted_blocks=1 00:08:00.141 00:08:00.141 ' 00:08:00.141 15:12:27 thread -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:00.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.141 --rc genhtml_branch_coverage=1 00:08:00.141 --rc genhtml_function_coverage=1 00:08:00.141 --rc genhtml_legend=1 00:08:00.141 --rc geninfo_all_blocks=1 00:08:00.141 --rc geninfo_unexecuted_blocks=1 00:08:00.141 00:08:00.141 ' 00:08:00.141 15:12:27 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:00.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.141 --rc genhtml_branch_coverage=1 00:08:00.141 --rc genhtml_function_coverage=1 00:08:00.141 --rc genhtml_legend=1 00:08:00.141 --rc geninfo_all_blocks=1 00:08:00.141 --rc geninfo_unexecuted_blocks=1 00:08:00.141 00:08:00.141 ' 00:08:00.141 15:12:27 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:00.141 15:12:27 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:00.141 15:12:27 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:00.141 15:12:27 thread -- common/autotest_common.sh@10 -- # set +x 00:08:00.141 ************************************ 00:08:00.141 START TEST thread_poller_perf 00:08:00.141 ************************************ 00:08:00.141 15:12:27 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:00.141 [2024-11-06 15:12:27.666887] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:00.142 [2024-11-06 15:12:27.666976] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2962856 ] 00:08:00.400 [2024-11-06 15:12:27.818537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.401 [2024-11-06 15:12:27.928110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.401 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:08:01.776 [2024-11-06T14:12:29.411Z] ====================================== 00:08:01.776 [2024-11-06T14:12:29.411Z] busy:2311235284 (cyc) 00:08:01.776 [2024-11-06T14:12:29.411Z] total_run_count: 411000 00:08:01.776 [2024-11-06T14:12:29.411Z] tsc_hz: 2300000000 (cyc) 00:08:01.776 [2024-11-06T14:12:29.411Z] ====================================== 00:08:01.776 [2024-11-06T14:12:29.411Z] poller_cost: 5623 (cyc), 2444 (nsec) 00:08:01.776 00:08:01.776 real 0m1.530s 00:08:01.776 user 0m1.364s 00:08:01.776 sys 0m0.160s 00:08:01.776 15:12:29 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:01.776 15:12:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:01.776 ************************************ 00:08:01.776 END TEST thread_poller_perf 00:08:01.776 ************************************ 00:08:01.776 15:12:29 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:01.776 15:12:29 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:01.776 15:12:29 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:01.776 15:12:29 thread -- common/autotest_common.sh@10 -- # set +x 00:08:01.776 ************************************ 00:08:01.776 START TEST thread_poller_perf 00:08:01.776 ************************************ 00:08:01.776 15:12:29 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:01.776 [2024-11-06 15:12:29.280438] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:01.776 [2024-11-06 15:12:29.280522] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2963170 ] 00:08:02.035 [2024-11-06 15:12:29.430253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.035 [2024-11-06 15:12:29.537623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.035 Running 1000 pollers for 1 seconds with 0 microseconds period. 
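The poller_cost figure above comes straight from the two counters printed with it: busy TSC cycles divided by total_run_count, then converted to nanoseconds using tsc_hz. Reproducing the arithmetic for the 1-microsecond run (the 0-period run reported next is computed the same way):
    # cycles per poller invocation and nanoseconds per invocation, numbers taken from the run above
    awk 'BEGIN {
        busy = 2311235284; runs = 411000; hz = 2300000000
        cyc  = busy / runs                      # ~5623 cycles per poll
        nsec = cyc / hz * 1e9                   # ~2444 ns at 2.3 GHz
        printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, nsec
    }'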
00:08:03.411 [2024-11-06T14:12:31.046Z] ====================================== 00:08:03.411 [2024-11-06T14:12:31.046Z] busy:2302900094 (cyc) 00:08:03.411 [2024-11-06T14:12:31.046Z] total_run_count: 5240000 00:08:03.411 [2024-11-06T14:12:31.046Z] tsc_hz: 2300000000 (cyc) 00:08:03.411 [2024-11-06T14:12:31.046Z] ====================================== 00:08:03.411 [2024-11-06T14:12:31.046Z] poller_cost: 439 (cyc), 190 (nsec) 00:08:03.411 00:08:03.411 real 0m1.523s 00:08:03.411 user 0m1.362s 00:08:03.411 sys 0m0.154s 00:08:03.411 15:12:30 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:03.411 15:12:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:03.411 ************************************ 00:08:03.411 END TEST thread_poller_perf 00:08:03.411 ************************************ 00:08:03.411 15:12:30 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:03.411 00:08:03.411 real 0m3.411s 00:08:03.411 user 0m2.898s 00:08:03.411 sys 0m0.530s 00:08:03.411 15:12:30 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:03.411 15:12:30 thread -- common/autotest_common.sh@10 -- # set +x 00:08:03.411 ************************************ 00:08:03.411 END TEST thread 00:08:03.411 ************************************ 00:08:03.411 15:12:30 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:03.411 15:12:30 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:03.411 15:12:30 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:03.411 15:12:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:03.411 15:12:30 -- common/autotest_common.sh@10 -- # set +x 00:08:03.411 ************************************ 00:08:03.411 START TEST app_cmdline 00:08:03.411 ************************************ 00:08:03.411 15:12:30 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:03.411 * Looking for test storage... 
00:08:03.411 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:03.411 15:12:30 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:03.411 15:12:30 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:08:03.411 15:12:30 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:03.670 15:12:31 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:03.670 15:12:31 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:03.670 15:12:31 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.670 15:12:31 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:03.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.670 --rc genhtml_branch_coverage=1 00:08:03.670 --rc genhtml_function_coverage=1 00:08:03.670 --rc genhtml_legend=1 00:08:03.670 --rc geninfo_all_blocks=1 00:08:03.670 --rc geninfo_unexecuted_blocks=1 00:08:03.670 00:08:03.670 ' 00:08:03.670 15:12:31 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:03.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.670 --rc genhtml_branch_coverage=1 00:08:03.670 --rc genhtml_function_coverage=1 00:08:03.670 --rc genhtml_legend=1 00:08:03.670 --rc geninfo_all_blocks=1 00:08:03.670 --rc geninfo_unexecuted_blocks=1 
00:08:03.670 00:08:03.670 ' 00:08:03.670 15:12:31 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:03.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.670 --rc genhtml_branch_coverage=1 00:08:03.670 --rc genhtml_function_coverage=1 00:08:03.670 --rc genhtml_legend=1 00:08:03.670 --rc geninfo_all_blocks=1 00:08:03.670 --rc geninfo_unexecuted_blocks=1 00:08:03.670 00:08:03.670 ' 00:08:03.670 15:12:31 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:03.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.670 --rc genhtml_branch_coverage=1 00:08:03.670 --rc genhtml_function_coverage=1 00:08:03.670 --rc genhtml_legend=1 00:08:03.670 --rc geninfo_all_blocks=1 00:08:03.670 --rc geninfo_unexecuted_blocks=1 00:08:03.670 00:08:03.670 ' 00:08:03.671 15:12:31 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:03.671 15:12:31 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2963474 00:08:03.671 15:12:31 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2963474 00:08:03.671 15:12:31 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:03.671 15:12:31 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 2963474 ']' 00:08:03.671 15:12:31 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.671 15:12:31 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:03.671 15:12:31 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.671 15:12:31 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:03.671 15:12:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:03.671 [2024-11-06 15:12:31.185973] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:08:03.671 [2024-11-06 15:12:31.186094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2963474 ] 00:08:03.929 [2024-11-06 15:12:31.333036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.929 [2024-11-06 15:12:31.446085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.865 15:12:32 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:04.865 15:12:32 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:08:04.865 15:12:32 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:04.865 { 00:08:04.865 "version": "SPDK v25.01-pre git sha1 d1c46ed8e", 00:08:04.865 "fields": { 00:08:04.865 "major": 25, 00:08:04.865 "minor": 1, 00:08:04.865 "patch": 0, 00:08:04.865 "suffix": "-pre", 00:08:04.865 "commit": "d1c46ed8e" 00:08:04.865 } 00:08:04.865 } 00:08:04.865 15:12:32 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:04.865 15:12:32 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:04.865 15:12:32 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:04.865 15:12:32 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:04.865 15:12:32 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:04.865 15:12:32 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.865 15:12:32 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:04.865 15:12:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:04.865 15:12:32 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:04.865 15:12:32 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.865 15:12:32 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:04.865 15:12:32 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:04.865 15:12:32 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:04.865 15:12:32 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:04.865 15:12:32 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:04.865 15:12:32 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:04.865 15:12:32 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.866 15:12:32 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:04.866 15:12:32 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.866 15:12:32 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:04.866 15:12:32 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.866 15:12:32 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:04.866 15:12:32 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:04.866 15:12:32 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:05.124 request: 00:08:05.124 { 00:08:05.124 "method": "env_dpdk_get_mem_stats", 00:08:05.124 "req_id": 1 00:08:05.124 } 00:08:05.124 Got JSON-RPC error response 00:08:05.124 response: 00:08:05.124 { 00:08:05.124 "code": -32601, 00:08:05.124 "message": "Method not found" 00:08:05.124 } 00:08:05.124 15:12:32 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:05.124 15:12:32 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:05.124 15:12:32 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:05.124 15:12:32 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:05.124 15:12:32 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2963474 00:08:05.124 15:12:32 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 2963474 ']' 00:08:05.124 15:12:32 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 2963474 00:08:05.124 15:12:32 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:08:05.124 15:12:32 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:05.124 15:12:32 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2963474 00:08:05.125 15:12:32 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:05.125 15:12:32 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:05.125 15:12:32 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2963474' 00:08:05.125 killing process with pid 2963474 00:08:05.125 15:12:32 app_cmdline -- common/autotest_common.sh@971 -- # kill 2963474 00:08:05.125 15:12:32 app_cmdline -- common/autotest_common.sh@976 -- # wait 2963474 00:08:07.658 00:08:07.658 real 0m4.111s 00:08:07.658 user 0m4.262s 00:08:07.658 sys 0m0.728s 00:08:07.658 15:12:34 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:07.658 15:12:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:07.658 ************************************ 00:08:07.658 END TEST app_cmdline 00:08:07.658 ************************************ 00:08:07.658 15:12:35 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:07.658 15:12:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:07.658 15:12:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:07.658 15:12:35 -- common/autotest_common.sh@10 -- # set +x 00:08:07.658 ************************************ 00:08:07.658 START TEST version 00:08:07.658 ************************************ 00:08:07.658 15:12:35 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:07.658 * Looking for test storage... 
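The "Method not found" (-32601) response above is the point of the cmdline test: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two calls succeed while anything else, here env_dpdk_get_mem_stats, is rejected. A sketch of that allow-list behaviour with the same build paths (waits and cleanup elided):
    # sketch of the RPC allow-list exercised above
    BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $BIN --rpcs-allowed spdk_get_version,rpc_get_methods &
    $RPC spdk_get_version         # allowed: returns the version JSON shown above
    $RPC rpc_get_methods          # allowed: exactly the two permitted methods come back
    $RPC env_dpdk_get_mem_stats   # any other method is rejected with -32601 "Method not found"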
00:08:07.658 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:07.658 15:12:35 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:07.658 15:12:35 version -- common/autotest_common.sh@1691 -- # lcov --version 00:08:07.658 15:12:35 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:07.658 15:12:35 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:07.658 15:12:35 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.658 15:12:35 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.658 15:12:35 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.658 15:12:35 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.658 15:12:35 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.658 15:12:35 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.658 15:12:35 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.658 15:12:35 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.658 15:12:35 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.658 15:12:35 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.658 15:12:35 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.658 15:12:35 version -- scripts/common.sh@344 -- # case "$op" in 00:08:07.658 15:12:35 version -- scripts/common.sh@345 -- # : 1 00:08:07.658 15:12:35 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.658 15:12:35 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:07.658 15:12:35 version -- scripts/common.sh@365 -- # decimal 1 00:08:07.658 15:12:35 version -- scripts/common.sh@353 -- # local d=1 00:08:07.658 15:12:35 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.658 15:12:35 version -- scripts/common.sh@355 -- # echo 1 00:08:07.658 15:12:35 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.658 15:12:35 version -- scripts/common.sh@366 -- # decimal 2 00:08:07.658 15:12:35 version -- scripts/common.sh@353 -- # local d=2 00:08:07.658 15:12:35 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.658 15:12:35 version -- scripts/common.sh@355 -- # echo 2 00:08:07.658 15:12:35 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.658 15:12:35 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.658 15:12:35 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.658 15:12:35 version -- scripts/common.sh@368 -- # return 0 00:08:07.658 15:12:35 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.658 15:12:35 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:07.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.658 --rc genhtml_branch_coverage=1 00:08:07.658 --rc genhtml_function_coverage=1 00:08:07.658 --rc genhtml_legend=1 00:08:07.658 --rc geninfo_all_blocks=1 00:08:07.658 --rc geninfo_unexecuted_blocks=1 00:08:07.658 00:08:07.658 ' 00:08:07.658 15:12:35 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:07.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.658 --rc genhtml_branch_coverage=1 00:08:07.658 --rc genhtml_function_coverage=1 00:08:07.658 --rc genhtml_legend=1 00:08:07.658 --rc geninfo_all_blocks=1 00:08:07.658 --rc geninfo_unexecuted_blocks=1 00:08:07.658 00:08:07.658 ' 00:08:07.658 15:12:35 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:07.658 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.658 --rc genhtml_branch_coverage=1 00:08:07.658 --rc genhtml_function_coverage=1 00:08:07.658 --rc genhtml_legend=1 00:08:07.658 --rc geninfo_all_blocks=1 00:08:07.658 --rc geninfo_unexecuted_blocks=1 00:08:07.658 00:08:07.658 ' 00:08:07.658 15:12:35 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:07.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.658 --rc genhtml_branch_coverage=1 00:08:07.658 --rc genhtml_function_coverage=1 00:08:07.658 --rc genhtml_legend=1 00:08:07.658 --rc geninfo_all_blocks=1 00:08:07.658 --rc geninfo_unexecuted_blocks=1 00:08:07.658 00:08:07.658 ' 00:08:07.658 15:12:35 version -- app/version.sh@17 -- # get_header_version major 00:08:07.658 15:12:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:07.658 15:12:35 version -- app/version.sh@14 -- # cut -f2 00:08:07.658 15:12:35 version -- app/version.sh@14 -- # tr -d '"' 00:08:07.658 15:12:35 version -- app/version.sh@17 -- # major=25 00:08:07.658 15:12:35 version -- app/version.sh@18 -- # get_header_version minor 00:08:07.658 15:12:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:07.658 15:12:35 version -- app/version.sh@14 -- # cut -f2 00:08:07.658 15:12:35 version -- app/version.sh@14 -- # tr -d '"' 00:08:07.918 15:12:35 version -- app/version.sh@18 -- # minor=1 00:08:07.918 15:12:35 version -- app/version.sh@19 -- # get_header_version patch 00:08:07.918 15:12:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:07.918 15:12:35 version -- app/version.sh@14 -- # cut -f2 00:08:07.918 15:12:35 version -- app/version.sh@14 -- # tr -d '"' 00:08:07.918 15:12:35 version -- app/version.sh@19 -- # patch=0 00:08:07.918 15:12:35 version -- app/version.sh@20 -- # get_header_version suffix 00:08:07.918 15:12:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:07.918 15:12:35 version -- app/version.sh@14 -- # cut -f2 00:08:07.918 15:12:35 version -- app/version.sh@14 -- # tr -d '"' 00:08:07.918 15:12:35 version -- app/version.sh@20 -- # suffix=-pre 00:08:07.918 15:12:35 version -- app/version.sh@22 -- # version=25.1 00:08:07.918 15:12:35 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:07.918 15:12:35 version -- app/version.sh@28 -- # version=25.1rc0 00:08:07.918 15:12:35 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:07.918 15:12:35 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:07.918 15:12:35 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:07.918 15:12:35 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:07.918 00:08:07.918 real 0m0.276s 00:08:07.918 user 0m0.155s 00:08:07.918 sys 0m0.177s 00:08:07.918 15:12:35 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:07.918 15:12:35 version -- 
common/autotest_common.sh@10 -- # set +x 00:08:07.918 ************************************ 00:08:07.918 END TEST version 00:08:07.918 ************************************ 00:08:07.918 15:12:35 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:07.918 15:12:35 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:07.918 15:12:35 -- spdk/autotest.sh@194 -- # uname -s 00:08:07.918 15:12:35 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:07.918 15:12:35 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:07.918 15:12:35 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:07.918 15:12:35 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:07.918 15:12:35 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:08:07.918 15:12:35 -- spdk/autotest.sh@256 -- # timing_exit lib 00:08:07.918 15:12:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:07.918 15:12:35 -- common/autotest_common.sh@10 -- # set +x 00:08:07.918 15:12:35 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:08:07.918 15:12:35 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:08:07.918 15:12:35 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:08:07.918 15:12:35 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:08:07.918 15:12:35 -- spdk/autotest.sh@276 -- # '[' rdma = rdma ']' 00:08:07.918 15:12:35 -- spdk/autotest.sh@277 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:07.918 15:12:35 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:07.918 15:12:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:07.918 15:12:35 -- common/autotest_common.sh@10 -- # set +x 00:08:07.918 ************************************ 00:08:07.918 START TEST nvmf_rdma 00:08:07.918 ************************************ 00:08:07.918 15:12:35 nvmf_rdma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:08.178 * Looking for test storage... 00:08:08.178 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:08:08.178 15:12:35 nvmf_rdma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:08.178 15:12:35 nvmf_rdma -- common/autotest_common.sh@1691 -- # lcov --version 00:08:08.178 15:12:35 nvmf_rdma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:08.178 15:12:35 nvmf_rdma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.178 15:12:35 nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:08:08.178 15:12:35 nvmf_rdma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.178 15:12:35 nvmf_rdma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:08.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.178 --rc genhtml_branch_coverage=1 00:08:08.178 --rc genhtml_function_coverage=1 00:08:08.178 --rc genhtml_legend=1 00:08:08.178 --rc geninfo_all_blocks=1 00:08:08.178 --rc geninfo_unexecuted_blocks=1 00:08:08.178 00:08:08.178 ' 00:08:08.178 15:12:35 nvmf_rdma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:08.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.178 --rc genhtml_branch_coverage=1 00:08:08.178 --rc genhtml_function_coverage=1 00:08:08.178 --rc genhtml_legend=1 00:08:08.178 --rc geninfo_all_blocks=1 00:08:08.179 --rc geninfo_unexecuted_blocks=1 00:08:08.179 00:08:08.179 ' 00:08:08.179 15:12:35 nvmf_rdma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:08.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.179 --rc genhtml_branch_coverage=1 00:08:08.179 --rc genhtml_function_coverage=1 00:08:08.179 --rc genhtml_legend=1 00:08:08.179 --rc geninfo_all_blocks=1 00:08:08.179 --rc geninfo_unexecuted_blocks=1 00:08:08.179 00:08:08.179 ' 00:08:08.179 15:12:35 nvmf_rdma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:08.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.179 --rc genhtml_branch_coverage=1 00:08:08.179 --rc genhtml_function_coverage=1 00:08:08.179 --rc genhtml_legend=1 00:08:08.179 --rc geninfo_all_blocks=1 00:08:08.179 --rc geninfo_unexecuted_blocks=1 00:08:08.179 00:08:08.179 ' 00:08:08.179 15:12:35 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:08:08.179 15:12:35 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:08.179 15:12:35 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:08:08.179 15:12:35 nvmf_rdma -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:08.179 15:12:35 nvmf_rdma -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:08.179 15:12:35 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:08.179 ************************************ 00:08:08.179 START TEST nvmf_target_core 00:08:08.179 ************************************ 00:08:08.179 15:12:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:08:08.439 * Looking for test storage... 00:08:08.439 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:08.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.439 --rc genhtml_branch_coverage=1 00:08:08.439 --rc genhtml_function_coverage=1 00:08:08.439 --rc genhtml_legend=1 00:08:08.439 --rc geninfo_all_blocks=1 00:08:08.439 --rc geninfo_unexecuted_blocks=1 00:08:08.439 00:08:08.439 ' 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:08.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.439 --rc genhtml_branch_coverage=1 00:08:08.439 --rc genhtml_function_coverage=1 00:08:08.439 --rc genhtml_legend=1 00:08:08.439 --rc geninfo_all_blocks=1 00:08:08.439 --rc geninfo_unexecuted_blocks=1 00:08:08.439 00:08:08.439 ' 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:08.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.439 --rc genhtml_branch_coverage=1 00:08:08.439 --rc genhtml_function_coverage=1 00:08:08.439 --rc genhtml_legend=1 00:08:08.439 --rc geninfo_all_blocks=1 00:08:08.439 --rc geninfo_unexecuted_blocks=1 00:08:08.439 00:08:08.439 ' 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:08.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.439 --rc genhtml_branch_coverage=1 00:08:08.439 --rc genhtml_function_coverage=1 00:08:08.439 --rc genhtml_legend=1 00:08:08.439 --rc geninfo_all_blocks=1 00:08:08.439 --rc geninfo_unexecuted_blocks=1 00:08:08.439 00:08:08.439 ' 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.439 15:12:35 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.440 15:12:35 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.440 15:12:35 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.440 15:12:35 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:08.440 15:12:35 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.440 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:08.440 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:08.440 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:08.440 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.440 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.440 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.440 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:08.440 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:08.440 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:08.440 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:08.440 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:08.440 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:08.440 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:08.440 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:08:08.440 15:12:35 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:08:08.440 15:12:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:08.440 15:12:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:08.440 15:12:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:08.440 
************************************ 00:08:08.440 START TEST nvmf_abort 00:08:08.440 ************************************ 00:08:08.440 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:08:08.700 * Looking for test storage... 00:08:08.700 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:08.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.700 --rc genhtml_branch_coverage=1 00:08:08.700 --rc genhtml_function_coverage=1 00:08:08.700 --rc genhtml_legend=1 00:08:08.700 --rc geninfo_all_blocks=1 00:08:08.700 --rc geninfo_unexecuted_blocks=1 00:08:08.700 00:08:08.700 ' 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:08.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.700 --rc genhtml_branch_coverage=1 00:08:08.700 --rc genhtml_function_coverage=1 00:08:08.700 --rc genhtml_legend=1 00:08:08.700 --rc geninfo_all_blocks=1 00:08:08.700 --rc geninfo_unexecuted_blocks=1 00:08:08.700 00:08:08.700 ' 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:08.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.700 --rc genhtml_branch_coverage=1 00:08:08.700 --rc genhtml_function_coverage=1 00:08:08.700 --rc genhtml_legend=1 00:08:08.700 --rc geninfo_all_blocks=1 00:08:08.700 --rc geninfo_unexecuted_blocks=1 00:08:08.700 00:08:08.700 ' 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:08.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.700 --rc genhtml_branch_coverage=1 00:08:08.700 --rc genhtml_function_coverage=1 00:08:08.700 --rc genhtml_legend=1 00:08:08.700 --rc geninfo_all_blocks=1 00:08:08.700 --rc geninfo_unexecuted_blocks=1 00:08:08.700 00:08:08.700 ' 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:08.700 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:08.701 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.701 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.701 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.701 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:08.701 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:08.701 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:08.701 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:08.701 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:08.701 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:08.701 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:08.701 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # 
nvmftestinit 00:08:08.701 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:08.701 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.701 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:08.701 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:08.701 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:08.701 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.701 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.701 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.701 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:08.701 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:08.701 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:08:08.701 15:12:36 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:16.823 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:16.823 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:08:16.823 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:16.823 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:16.823 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:16.823 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:16.823 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:16.823 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:08:16.823 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:16.823 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:08:16.823 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:08:16.823 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:08:16.823 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:08:16.823 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:08:16.823 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:08:16.823 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:16.823 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:16.823 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:16.823 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:16.823 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:16.823 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:16.823 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:16.823 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:16.824 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:16.824 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == 
rdma ]] 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:16.824 Found net devices under 0000:18:00.0: mlx_0_0 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:16.824 Found net devices under 0000:18:00.1: mlx_0_1 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # rdma_device_init 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@70 -- # modprobe iw_cm 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:16.824 15:12:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:16.824 2: mlx_0_0: mtu 1500 
qdisc mq state DOWN group default qlen 1000 00:08:16.824 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:08:16.824 altname enp24s0f0np0 00:08:16.824 altname ens785f0np0 00:08:16.824 inet 192.168.100.8/24 scope global mlx_0_0 00:08:16.824 valid_lft forever preferred_lft forever 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:16.824 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:16.824 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:08:16.824 altname enp24s0f1np1 00:08:16.824 altname ens785f1np1 00:08:16.824 inet 192.168.100.9/24 scope global mlx_0_1 00:08:16.824 valid_lft forever preferred_lft forever 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:16.824 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:16.825 15:12:43 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:16.825 192.168.100.9' 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:16.825 192.168.100.9' 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # head -n 1 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:16.825 192.168.100.9' 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # tail -n +2 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # head -n 1 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:16.825 15:12:43 
nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2967216 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2967216 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 2967216 ']' 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:16.825 15:12:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:16.825 [2024-11-06 15:12:43.265950] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:16.825 [2024-11-06 15:12:43.266064] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.825 [2024-11-06 15:12:43.420272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:16.825 [2024-11-06 15:12:43.526902] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.825 [2024-11-06 15:12:43.526959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.825 [2024-11-06 15:12:43.526974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.825 [2024-11-06 15:12:43.526988] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.825 [2024-11-06 15:12:43.526998] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:16.825 [2024-11-06 15:12:43.529148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.825 [2024-11-06 15:12:43.529188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.825 [2024-11-06 15:12:43.529215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.825 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:16.825 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:08:16.825 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:16.825 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:16.825 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:16.825 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.825 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:08:16.825 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.825 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:16.825 [2024-11-06 15:12:44.161619] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7ffb92bbd940) succeed. 00:08:16.825 [2024-11-06 15:12:44.178277] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7ffb92b79940) succeed. 00:08:16.825 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.825 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:16.825 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.825 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:17.084 Malloc0 00:08:17.084 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.084 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:17.084 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.084 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:17.084 Delay0 00:08:17.084 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.084 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:17.084 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.084 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:17.084 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.084 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:17.084 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.084 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:17.084 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.084 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:17.084 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.084 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:17.084 [2024-11-06 15:12:44.528571] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:17.084 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.084 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:17.084 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.084 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:17.084 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.084 15:12:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:17.084 [2024-11-06 15:12:44.712112] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:19.620 Initializing NVMe Controllers 00:08:19.620 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:08:19.620 controller IO queue size 128 less than required 00:08:19.620 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:19.620 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:19.620 Initialization complete. Launching workers. 
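The abort test configures the target entirely through rpc_cmd before launching build/examples/abort. Replayed by hand, the sequence traced in target/abort.sh@17-30 above would look roughly like the following sketch (parameters copied from this run; the 1,000,000 us delay bdev keeps I/O queued long enough for aborts to land):

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
$RPC bdev_malloc_create 64 4096 -b Malloc0                  # 64 MiB backing bdev, 4 KiB blocks
$RPC bdev_delay_create -b Malloc0 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000            # ~1 s average latency on every op
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

# Queue up I/O at depth 128 and abort it, as target/abort.sh@30 does above.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128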
00:08:19.620 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36615 00:08:19.620 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36676, failed to submit 62 00:08:19.620 success 36618, unsuccessful 58, failed 0 00:08:19.620 15:12:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:19.620 15:12:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.620 15:12:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:19.620 15:12:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.620 15:12:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:19.620 15:12:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:19.620 15:12:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:19.620 15:12:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:08:19.620 15:12:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:19.620 15:12:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:19.620 15:12:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:08:19.620 15:12:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:19.620 15:12:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:19.620 rmmod nvme_rdma 00:08:19.620 rmmod nvme_fabrics 00:08:19.620 15:12:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:19.620 15:12:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:08:19.620 15:12:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:08:19.620 15:12:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2967216 ']' 00:08:19.620 15:12:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2967216 00:08:19.620 15:12:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 2967216 ']' 00:08:19.620 15:12:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 2967216 00:08:19.620 15:12:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:08:19.620 15:12:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:19.620 15:12:46 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2967216 00:08:19.620 15:12:47 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:19.620 15:12:47 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:19.620 15:12:47 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2967216' 00:08:19.620 killing process with pid 2967216 00:08:19.620 15:12:47 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 2967216 00:08:19.620 15:12:47 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 2967216 00:08:21.528 15:12:48 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:21.528 15:12:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:21.528 00:08:21.528 real 0m12.748s 00:08:21.528 user 0m18.959s 00:08:21.528 sys 0m6.079s 00:08:21.528 15:12:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:21.528 15:12:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:21.528 ************************************ 00:08:21.528 END TEST nvmf_abort 00:08:21.528 ************************************ 00:08:21.528 15:12:48 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:08:21.528 15:12:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:21.528 15:12:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:21.528 15:12:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:21.528 ************************************ 00:08:21.528 START TEST nvmf_ns_hotplug_stress 00:08:21.528 ************************************ 00:08:21.528 15:12:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:08:21.528 * Looking for test storage... 00:08:21.528 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:21.528 15:12:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:21.528 15:12:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:08:21.528 15:12:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 
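The lcov version probe traced here relies on cmp_versions from scripts/common.sh: split each version string on dots and dashes, then compare field by field numerically. A condensed sketch of that idea (not the exact common.sh implementation, and assuming purely numeric fields):

version_lt() {
    # Succeed if $1 sorts before $2 when compared field by field, as in the
    # ver1/ver2 loop traced above (scripts/common.sh@364-368).
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1            # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 is older than 2"    # mirrors the 'lt 1.15 2' check above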
00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:21.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.528 --rc genhtml_branch_coverage=1 00:08:21.528 --rc genhtml_function_coverage=1 00:08:21.528 --rc genhtml_legend=1 00:08:21.528 --rc geninfo_all_blocks=1 00:08:21.528 --rc geninfo_unexecuted_blocks=1 00:08:21.528 00:08:21.528 ' 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:21.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.528 --rc genhtml_branch_coverage=1 00:08:21.528 --rc genhtml_function_coverage=1 00:08:21.528 --rc genhtml_legend=1 00:08:21.528 --rc geninfo_all_blocks=1 00:08:21.528 --rc geninfo_unexecuted_blocks=1 00:08:21.528 00:08:21.528 ' 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:21.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.528 --rc genhtml_branch_coverage=1 00:08:21.528 --rc genhtml_function_coverage=1 00:08:21.528 --rc genhtml_legend=1 00:08:21.528 --rc geninfo_all_blocks=1 00:08:21.528 --rc geninfo_unexecuted_blocks=1 00:08:21.528 00:08:21.528 ' 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:21.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:21.528 --rc genhtml_branch_coverage=1 00:08:21.528 --rc genhtml_function_coverage=1 00:08:21.528 --rc genhtml_legend=1 00:08:21.528 --rc geninfo_all_blocks=1 00:08:21.528 --rc geninfo_unexecuted_blocks=1 00:08:21.528 00:08:21.528 ' 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.528 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.529 15:12:49 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:21.529 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:08:21.529 15:12:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:08:29.652 15:12:55 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:29.652 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:29.652 15:12:55 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:29.652 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:29.652 Found net devices under 0000:18:00.0: mlx_0_0 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
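The NIC discovery traced in this block pairs each supported Mellanox PCI function with its kernel net device by globbing sysfs. Condensed, the lookup for the two ConnectX ports found in this run (0000:18:00.0 and 0000:18:00.1, device id 0x1015) is roughly:

# Same sysfs glob nvmf/common.sh@411 and @427-428 trace above: list the netdevs
# sitting on each RDMA-capable PCI function, then strip the path down to ifnames.
for pci in 0000:18:00.0 0000:18:00.1; do
    pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
    pci_net_devs=( "${pci_net_devs[@]##*/}" )
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done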
00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:29.652 Found net devices under 0000:18:00.1: mlx_0_1 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:29.652 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:29.653 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:29.653 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:08:29.653 altname enp24s0f0np0 00:08:29.653 altname ens785f0np0 00:08:29.653 inet 192.168.100.8/24 scope global mlx_0_0 00:08:29.653 valid_lft forever preferred_lft forever 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:29.653 15:12:55 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:29.653 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:29.653 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:08:29.653 altname enp24s0f1np1 00:08:29.653 altname ens785f1np1 00:08:29.653 inet 192.168.100.9/24 scope global mlx_0_1 00:08:29.653 valid_lft forever preferred_lft forever 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_0 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:29.653 15:12:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:29.653 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:29.653 192.168.100.9' 00:08:29.653 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:29.653 192.168.100.9' 00:08:29.653 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # head -n 1 00:08:29.653 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:29.653 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:29.653 192.168.100.9' 00:08:29.653 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # tail -n +2 00:08:29.653 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # head -n 1 00:08:29.653 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:29.653 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:29.653 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:29.653 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:29.653 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:29.653 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:29.653 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:29.653 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:29.653 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:29.653 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.654 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2970945 00:08:29.654 15:12:56 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2970945 00:08:29.654 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:29.654 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 2970945 ']' 00:08:29.654 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.654 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:29.654 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.654 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:29.654 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.654 [2024-11-06 15:12:56.156920] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:29.654 [2024-11-06 15:12:56.157028] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.654 [2024-11-06 15:12:56.307498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:29.654 [2024-11-06 15:12:56.413961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.654 [2024-11-06 15:12:56.414022] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.654 [2024-11-06 15:12:56.414035] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.654 [2024-11-06 15:12:56.414049] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.654 [2024-11-06 15:12:56.414059] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
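Before this second nvmfappstart, the harness derived the two RDMA test addresses from the mlx interfaces. The extraction traced in nvmf/common.sh@116-117 and @484-486 above condenses to the following sketch, using the interface names from this run:

# First IPv4 address of a netdev, then split the list into first/second target IPs.
get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }

RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8 here
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9 here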
00:08:29.654 [2024-11-06 15:12:56.416334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.654 [2024-11-06 15:12:56.416394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.654 [2024-11-06 15:12:56.416420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:29.654 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:29.654 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:08:29.654 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:29.654 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:29.654 15:12:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.654 15:12:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.654 15:12:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:29.654 15:12:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:29.654 [2024-11-06 15:12:57.233651] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7fa48e51d940) succeed. 00:08:29.654 [2024-11-06 15:12:57.243228] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7fa48dbbd940) succeed. 00:08:29.913 15:12:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:30.172 15:12:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:30.430 [2024-11-06 15:12:57.858150] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:30.430 15:12:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:30.690 15:12:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:30.690 Malloc0 00:08:30.950 15:12:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:30.950 Delay0 00:08:30.950 15:12:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.209 15:12:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:31.467 NULL1 00:08:31.467 15:12:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:31.727 15:12:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2971343 00:08:31.727 15:12:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:31.727 15:12:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:31.727 15:12:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.986 15:12:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.986 15:12:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:31.986 15:12:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:32.244 true 00:08:32.244 15:12:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:32.244 15:12:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.503 15:13:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.762 15:13:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:32.762 15:13:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:33.021 true 00:08:33.021 15:13:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:33.021 15:13:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.021 15:13:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.279 15:13:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:33.279 15:13:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:33.538 true 00:08:33.538 
15:13:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:33.538 15:13:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.796 15:13:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.055 15:13:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:34.055 15:13:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:34.055 true 00:08:34.055 15:13:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:34.055 15:13:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.315 15:13:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.573 15:13:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:34.573 15:13:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:34.832 true 00:08:34.832 15:13:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:34.832 15:13:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.832 15:13:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:35.091 15:13:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:35.091 15:13:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:35.349 true 00:08:35.349 15:13:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:35.349 15:13:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.607 15:13:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:35.866 15:13:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:35.866 15:13:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:35.866 true 00:08:35.866 15:13:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:35.866 15:13:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.124 15:13:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.383 15:13:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:36.383 15:13:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:36.642 true 00:08:36.642 15:13:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:36.642 15:13:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.901 15:13:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.901 15:13:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:36.901 15:13:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:37.159 true 00:08:37.159 15:13:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:37.159 15:13:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.418 15:13:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.677 15:13:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:37.677 15:13:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:37.677 true 00:08:37.677 15:13:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:37.677 15:13:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.936 15:13:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:38.194 15:13:05 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:38.195 15:13:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:38.456 true 00:08:38.456 15:13:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:38.456 15:13:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.456 15:13:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:38.739 15:13:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:38.739 15:13:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:39.019 true 00:08:39.019 15:13:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:39.019 15:13:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.277 15:13:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.277 15:13:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:39.277 15:13:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:39.536 true 00:08:39.536 15:13:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:39.536 15:13:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.794 15:13:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.053 15:13:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:40.053 15:13:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:40.053 true 00:08:40.313 15:13:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:40.313 15:13:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.313 15:13:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.571 15:13:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:40.571 15:13:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:40.830 true 00:08:40.830 15:13:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:40.830 15:13:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.089 15:13:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.347 15:13:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:41.347 15:13:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:41.347 true 00:08:41.347 15:13:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:41.347 15:13:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.606 15:13:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.864 15:13:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:41.864 15:13:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:42.122 true 00:08:42.122 15:13:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:42.122 15:13:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.381 15:13:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.381 15:13:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:42.381 15:13:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:42.640 true 00:08:42.640 15:13:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:42.640 15:13:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.899 15:13:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.157 15:13:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:43.158 15:13:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:43.417 true 00:08:43.417 15:13:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:43.417 15:13:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.417 15:13:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.676 15:13:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:43.676 15:13:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:43.934 true 00:08:43.934 15:13:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:43.934 15:13:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.193 15:13:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.452 15:13:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:44.452 15:13:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:44.452 true 00:08:44.452 15:13:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:44.452 15:13:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.711 15:13:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.970 15:13:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:44.970 15:13:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:45.228 true 00:08:45.228 15:13:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:45.228 15:13:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.487 15:13:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.487 15:13:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:45.487 15:13:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:45.746 true 00:08:45.746 15:13:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:45.746 15:13:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.005 15:13:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.264 15:13:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:46.264 15:13:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:46.264 true 00:08:46.524 15:13:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:46.524 15:13:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.524 15:13:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.783 15:13:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:46.783 15:13:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:47.041 true 00:08:47.041 15:13:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:47.041 15:13:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.300 15:13:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.559 15:13:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:47.559 15:13:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:47.559 true 00:08:47.818 15:13:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:47.818 15:13:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.818 15:13:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.076 15:13:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:48.076 15:13:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:48.335 true 00:08:48.335 15:13:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:48.335 15:13:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.594 15:13:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.853 15:13:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:48.853 15:13:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:48.853 true 00:08:49.113 15:13:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:49.113 15:13:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.113 15:13:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.372 15:13:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:49.372 15:13:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:49.631 true 00:08:49.631 15:13:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:49.631 15:13:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.890 15:13:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.149 15:13:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:08:50.149 15:13:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:08:50.149 true 00:08:50.149 15:13:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:50.149 15:13:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.408 15:13:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.667 15:13:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:08:50.667 15:13:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:08:50.926 true 00:08:50.926 15:13:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:50.926 15:13:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.926 15:13:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.185 15:13:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:08:51.185 15:13:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:08:51.444 true 00:08:51.444 15:13:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:51.444 15:13:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.703 15:13:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.963 15:13:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:08:51.963 15:13:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:08:51.963 true 00:08:51.963 15:13:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:51.963 15:13:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.221 15:13:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.479 15:13:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:08:52.479 15:13:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:08:52.738 true 00:08:52.738 15:13:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:52.738 15:13:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.997 15:13:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.997 15:13:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:08:52.997 15:13:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:08:53.256 true 00:08:53.256 15:13:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:53.256 15:13:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.514 15:13:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.773 15:13:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:08:53.773 15:13:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:08:54.033 true 00:08:54.033 15:13:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:54.033 15:13:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.033 15:13:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.291 15:13:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:08:54.292 15:13:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:08:54.550 true 00:08:54.550 15:13:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:54.550 15:13:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.810 15:13:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.069 15:13:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:08:55.069 15:13:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:08:55.069 true 00:08:55.069 15:13:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:55.069 15:13:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.328 15:13:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.587 15:13:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:08:55.587 15:13:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:08:55.846 true 00:08:55.846 15:13:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:55.846 15:13:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.106 15:13:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.106 15:13:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:08:56.106 15:13:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:08:56.365 true 00:08:56.365 15:13:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:56.365 15:13:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.625 15:13:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.884 15:13:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:08:56.884 15:13:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:08:56.884 true 00:08:57.144 15:13:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:57.144 15:13:24 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.144 15:13:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.404 15:13:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:08:57.404 15:13:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:08:57.664 true 00:08:57.664 15:13:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:57.664 15:13:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.923 15:13:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.923 15:13:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:08:57.923 15:13:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:08:58.182 true 00:08:58.182 15:13:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:58.182 15:13:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.441 15:13:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.700 15:13:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:08:58.700 15:13:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:08:58.700 true 00:08:58.958 15:13:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:58.958 15:13:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.959 15:13:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.217 15:13:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:08:59.217 15:13:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:08:59.476 true 
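[Editor's note] The trace above repeats one cycle dozens of times while spdk_nvme_perf (PERF_PID, set earlier in the trace) runs against the subsystem: remove namespace 1, re-add Delay0 as a namespace, then grow and resize the NULL1 null bdev under I/O. The sketch below is a minimal reconstruction of that cycle, not the verbatim ns_hotplug_stress.sh source: the rpc.py path, NQN, bdev names, and arguments are copied from the logged commands, while the loop form, the kill -0 liveness check, and the null_size counter are inferred from the sh@44-sh@50 trace markers and may differ from the real script.
    # Hedged sketch of the single-namespace hotplug/resize loop seen in the trace.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    null_size=1000                                  # NULL1 was created with 1000 blocks of 512 bytes
    while kill -0 "$PERF_PID" 2>/dev/null; do       # keep cycling while spdk_nvme_perf is still running
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # hot-remove namespace 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # hot-add Delay0 as a new namespace
        null_size=$((null_size + 1))                # grow the null bdev by one block per iteration
        "$rpc" bdev_null_resize NULL1 "$null_size"  # resize NULL1 while the initiator keeps issuing reads
    done
The point of the pattern, as the log shows, is that every resize and namespace swap happens while the 30-second randread workload from spdk_nvme_perf is still in flight.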
00:08:59.476 15:13:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:59.476 15:13:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.735 15:13:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.995 15:13:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:08:59.995 15:13:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:08:59.995 true 00:08:59.995 15:13:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:08:59.995 15:13:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.255 15:13:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.514 15:13:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:09:00.514 15:13:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:09:00.772 true 00:09:00.772 15:13:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:09:00.772 15:13:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.772 15:13:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.031 15:13:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:09:01.031 15:13:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:09:01.289 true 00:09:01.289 15:13:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:09:01.289 15:13:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.548 15:13:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.806 15:13:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:09:01.806 15:13:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:09:01.806 true 00:09:01.806 15:13:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:09:01.806 15:13:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.065 15:13:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.323 15:13:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:09:02.323 15:13:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:09:02.582 true 00:09:02.582 15:13:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:09:02.582 15:13:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.841 15:13:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.841 15:13:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:09:02.841 Initializing NVMe Controllers 00:09:02.841 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:09:02.841 Controller IO queue size 128, less than required. 00:09:02.841 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:02.841 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:02.841 Initialization complete. Launching workers. 
00:09:02.841 ======================================================== 00:09:02.841 Latency(us) 00:09:02.841 Device Information : IOPS MiB/s Average min max 00:09:02.841 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 34861.90 17.02 3671.55 2111.21 5160.51 00:09:02.841 ======================================================== 00:09:02.841 Total : 34861.90 17.02 3671.55 2111.21 5160.51 00:09:02.841 00:09:02.841 15:13:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:09:03.100 true 00:09:03.100 15:13:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2971343 00:09:03.100 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2971343) - No such process 00:09:03.100 15:13:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2971343 00:09:03.100 15:13:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.359 15:13:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:03.618 15:13:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:09:03.618 15:13:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:09:03.618 15:13:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:09:03.618 15:13:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:03.618 15:13:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:03.878 null0 00:09:03.878 15:13:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:03.878 15:13:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:03.878 15:13:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:03.878 null1 00:09:03.878 15:13:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:03.878 15:13:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:03.878 15:13:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:04.137 null2 00:09:04.137 15:13:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:04.137 15:13:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:04.137 15:13:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:04.395 null3 00:09:04.395 15:13:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:04.395 15:13:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:04.395 15:13:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:04.654 null4 00:09:04.654 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:04.654 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:04.654 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:04.654 null5 00:09:04.913 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:04.913 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:04.914 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:04.914 null6 00:09:04.914 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:04.914 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:04.914 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:05.173 null7 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:05.173 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
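[Editor's note] The entries around this point launch the second phase of the test: eight null bdevs (null0..null7, 100 MiB with 4096-byte blocks) and eight background workers that each add and remove their own namespace ID ten times, tracked in the pids array and joined with wait. The sketch below is an illustrative reconstruction under those assumptions: bdev names, namespace IDs, the iteration bound of 10, and the rpc.py invocations are taken from the logged sh@14-sh@18 and sh@58-sh@66 markers, but the add_remove function and loop wording are inferred, not copied from the source script.
    # Hedged sketch of the 8-way parallel namespace add/remove stress seen in the trace.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # attach bdev as namespace nsid
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # then detach it again
        done
    }
    for ((i = 0; i < 8; i++)); do
        "$rpc" bdev_null_create "null$i" 100 4096   # one 100 MiB / 4096-byte-block null bdev per worker
    done
    pids=()
    for ((i = 0; i < 8; i++)); do
        add_remove "$((i + 1))" "null$i" &          # worker i stresses namespace ID i+1 in the background
        pids+=($!)
    done
    wait "${pids[@]}"                               # join all eight workers before tearing the subsystem down
The interleaved "(( i < 10 ))" and pids+=($!) entries in the surrounding trace are these eight workers and the launch loop running concurrently, which is why their output appears shuffled together.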
00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2976030 2976031 2976034 2976036 2976037 2976039 2976041 2976043 00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.174 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:05.432 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:05.432 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:05.432 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:05.432 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.432 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:05.433 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:05.433 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:05.433 15:13:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:05.690 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.690 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.690 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:05.690 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.690 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.690 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:05.690 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.690 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.690 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:05.690 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.690 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.690 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.690 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:05.690 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.690 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:05.690 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.690 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.690 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:05.690 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.690 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.690 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:05.690 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.690 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.690 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:05.949 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.949 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:05.949 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:05.949 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:05.949 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:05.949 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:05.949 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:05.949 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:05.949 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.949 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.949 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:05.949 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.949 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.949 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:05.949 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.949 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.949 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.949 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.949 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:05.949 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:06.207 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:06.207 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.207 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:06.207 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:06.207 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.207 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:06.207 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:06.208 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.208 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:06.208 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:06.208 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.208 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:06.208 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:06.208 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:06.208 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:06.208 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:06.208 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:06.208 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.208 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:06.208 15:13:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:06.466 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:06.466 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.466 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:06.466 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:06.466 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.466 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:06.466 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:06.466 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.466 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:06.466 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:06.466 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.466 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:06.466 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:06.466 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:09:06.466 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.466 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:06.466 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.466 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:06.466 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:06.466 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.466 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:06.466 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:06.466 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.466 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:06.725 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:06.725 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:06.725 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:06.725 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:06.725 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:06.725 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.725 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:06.725 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:06.984 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:06.984 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.984 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:06.984 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:06.984 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.984 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:06.984 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:06.984 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.984 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:06.984 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:06.984 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.984 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:06.984 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:06.984 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.984 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:06.984 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:06.984 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.984 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:06.985 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:06.985 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.985 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:06.985 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:06.985 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.985 15:13:34 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:07.244 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:07.244 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:07.244 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:07.244 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:07.244 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:07.244 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.244 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:07.244 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:07.244 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.244 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.244 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:07.503 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.503 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.503 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:07.503 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.503 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.503 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:07.503 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:09:07.503 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.503 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:07.503 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.503 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.503 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:07.503 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.503 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.503 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:07.503 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.503 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.503 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:07.503 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.503 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.503 15:13:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:07.503 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:07.503 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.503 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:07.503 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:07.503 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:07.503 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:07.503 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:07.503 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:07.762 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.762 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.762 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:07.762 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.762 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.762 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:07.762 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.762 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.762 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:07.762 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.762 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.762 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:07.762 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.762 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.762 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.762 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.762 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:07.762 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:07.762 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
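While the workers iterate, one way to spot-check the namespace churn from another shell would be to dump the subsystem state on the same target. The snippet below is a hypothetical check, not part of the test: it assumes the nvmf_get_subsystems RPC is available in this SPDK build and that its pretty-printed JSON emits one "nsid" field per attached namespace, so the grep gives only a rough count.

# Hypothetical spot-check while the stress loop runs; not taken from the test script.
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
"$rpc_py" nvmf_get_subsystems                      # dump all subsystems and their namespaces
"$rpc_py" nvmf_get_subsystems | grep -c '"nsid"'   # approximate count of currently attached namespaces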
00:09:07.762 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.762 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:07.762 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.762 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.762 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:08.022 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:08.022 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:08.022 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:08.022 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:08.022 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:08.022 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:08.022 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.022 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:08.281 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.281 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.281 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:08.281 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.281 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.281 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:08.281 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.281 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.281 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.281 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.281 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:08.281 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:08.281 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.281 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.281 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.281 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.281 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:08.281 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:08.281 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.281 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.281 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:08.281 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.281 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.281 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:08.541 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:08.541 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:08.541 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:08.541 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:08.541 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:08.541 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:08.541 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.541 15:13:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:08.541 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.541 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.541 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:08.541 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.541 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.541 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.541 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.541 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:08.541 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:08.799 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.799 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.799 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.799 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:08.799 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.799 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:09:08.799 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.799 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.799 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:08.799 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.799 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.799 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:08.799 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.799 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.799 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:08.799 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:08.799 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:08.799 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:08.799 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:08.799 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.799 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:08.799 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:08.799 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:09.058 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.058 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.058 15:13:36 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:09.058 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.058 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.058 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:09.058 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.058 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.058 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:09.058 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.058 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.058 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:09.058 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.058 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.058 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:09.058 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.058 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.058 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:09.058 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.058 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.058 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:09.058 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.058 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.058 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 
00:09:09.317 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:09.317 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:09.317 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:09.317 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.317 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:09.317 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:09.317 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:09.317 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:09.577 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.577 15:13:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.577 15:13:37 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:09.577 rmmod nvme_rdma 00:09:09.577 rmmod nvme_fabrics 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2970945 ']' 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2970945 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 2970945 ']' 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 2970945 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2970945 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2970945' 00:09:09.577 killing process with pid 2970945 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 2970945 00:09:09.577 15:13:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 2970945 00:09:11.481 15:13:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:11.481 15:13:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:11.481 00:09:11.481 real 
0m49.969s 00:09:11.481 user 3m34.826s 00:09:11.481 sys 0m17.135s 00:09:11.481 15:13:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:11.481 15:13:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:11.481 ************************************ 00:09:11.481 END TEST nvmf_ns_hotplug_stress 00:09:11.481 ************************************ 00:09:11.481 15:13:38 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:09:11.481 15:13:38 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:11.482 15:13:38 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:11.482 15:13:38 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:11.482 ************************************ 00:09:11.482 START TEST nvmf_delete_subsystem 00:09:11.482 ************************************ 00:09:11.482 15:13:38 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:09:11.482 * Looking for test storage... 00:09:11.482 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # 
(( v = 0 )) 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:11.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.482 --rc genhtml_branch_coverage=1 00:09:11.482 --rc genhtml_function_coverage=1 00:09:11.482 --rc genhtml_legend=1 00:09:11.482 --rc geninfo_all_blocks=1 00:09:11.482 --rc geninfo_unexecuted_blocks=1 00:09:11.482 00:09:11.482 ' 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:11.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.482 --rc genhtml_branch_coverage=1 00:09:11.482 --rc genhtml_function_coverage=1 00:09:11.482 --rc genhtml_legend=1 00:09:11.482 --rc geninfo_all_blocks=1 00:09:11.482 --rc geninfo_unexecuted_blocks=1 00:09:11.482 00:09:11.482 ' 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:11.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.482 --rc genhtml_branch_coverage=1 00:09:11.482 --rc genhtml_function_coverage=1 00:09:11.482 --rc genhtml_legend=1 00:09:11.482 --rc geninfo_all_blocks=1 00:09:11.482 --rc geninfo_unexecuted_blocks=1 00:09:11.482 00:09:11.482 ' 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:11.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.482 --rc genhtml_branch_coverage=1 00:09:11.482 --rc genhtml_function_coverage=1 00:09:11.482 --rc genhtml_legend=1 00:09:11.482 --rc geninfo_all_blocks=1 00:09:11.482 --rc geninfo_unexecuted_blocks=1 00:09:11.482 
00:09:11.482 ' 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:11.482 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:11.741 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.741 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.741 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.741 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.741 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:11.742 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:11.742 15:13:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:09:18.313 15:13:45 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:18.313 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:18.314 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:18.314 
15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:18.314 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:18.314 Found net devices under 0000:18:00.0: mlx_0_0 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:18.314 Found net devices under 0000:18:00.1: mlx_0_1 00:09:18.314 15:13:45 
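The two mlx5 functions (0x15b3:0x1015) are tied to their kernel net devices by globbing sysfs under each PCI address rather than by parsing lspci output. A minimal stand-alone sketch of that lookup, using the 0000:18:00.0 and 0000:18:00.1 addresses reported in this run (any other address list works the same way):

# Map each Mellanox PCI function to the net device(s) registered under it (sketch).
for pci in 0000:18:00.0 0000:18:00.1; do
  for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$netdir" ] || continue          # skip functions with no bound netdev
    echo "Found net devices under $pci: ${netdir##*/}"
  done
done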
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # rdma_device_init 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # 
continue 2 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:18.314 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:18.314 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:09:18.314 altname enp24s0f0np0 00:09:18.314 altname ens785f0np0 00:09:18.314 inet 192.168.100.8/24 scope global mlx_0_0 00:09:18.314 valid_lft forever preferred_lft forever 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:18.314 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:18.574 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:18.574 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:09:18.574 altname enp24s0f1np1 00:09:18.574 altname 
ens785f1np1 00:09:18.574 inet 192.168.100.9/24 scope global mlx_0_1 00:09:18.574 valid_lft forever preferred_lft forever 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:18.574 15:13:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:18.574 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:18.574 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:18.574 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:18.574 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:18.574 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:18.574 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:18.574 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:18.574 192.168.100.9' 00:09:18.574 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:18.574 192.168.100.9' 00:09:18.574 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # head -n 1 00:09:18.574 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:18.574 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:18.574 192.168.100.9' 00:09:18.574 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # tail -n +2 00:09:18.574 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # head -n 1 00:09:18.574 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:18.574 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:18.574 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:18.574 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:18.574 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:18.575 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:18.575 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:18.575 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:18.575 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:18.575 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:18.575 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2979897 00:09:18.575 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2979897 00:09:18.575 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:18.575 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@833 -- # '[' -z 2979897 ']' 00:09:18.575 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.575 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:18.575 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.575 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:18.575 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:18.575 [2024-11-06 15:13:46.176376] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:09:18.575 [2024-11-06 15:13:46.176489] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.834 [2024-11-06 15:13:46.330262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:18.834 [2024-11-06 15:13:46.435418] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:18.834 [2024-11-06 15:13:46.435472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:18.834 [2024-11-06 15:13:46.435485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:18.834 [2024-11-06 15:13:46.435498] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:18.834 [2024-11-06 15:13:46.435511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
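nvmfappstart amounts to launching the target binary with the requested core mask and then blocking until its RPC socket answers. A rough stand-alone equivalent, assuming the default /var/tmp/spdk.sock socket and the build path used in this job (the harness' own waitforlisten differs in detail):

# Start the NVMe-oF target on cores 0-1 and wait for its RPC server (sketch).
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
# Poll with a harmless RPC (spdk_get_version) until the socket accepts calls.
until "$SPDK"/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
  kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before serving RPC" >&2; exit 1; }
  sleep 0.5
done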
00:09:18.834 [2024-11-06 15:13:46.437439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.834 [2024-11-06 15:13:46.437465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.401 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:19.401 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:09:19.401 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:19.401 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:19.401 15:13:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:19.401 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:19.401 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:19.401 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.401 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:19.660 [2024-11-06 15:13:47.060314] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028b40/0x7fad43bbd940) succeed. 00:09:19.660 [2024-11-06 15:13:47.069539] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028cc0/0x7fad43b79940) succeed. 00:09:19.660 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.660 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:19.660 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.660 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:19.660 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.660 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:19.660 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.660 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:19.660 [2024-11-06 15:13:47.244306] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:19.660 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.660 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:19.660 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.660 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:19.660 NULL1 00:09:19.660 15:13:47 
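rpc_cmd is essentially the harness wrapper around scripts/rpc.py, so the target configuration built up above corresponds to four plain RPC calls. Issued directly they would look roughly like this (paths as used in this job):

# Create the RDMA transport, a subsystem, its listener and the backing null bdev (sketch).
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
"$RPC" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
"$RPC" bdev_null_create NULL1 1000 512    # 1000 MB bdev with 512-byte blocks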
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.660 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:19.660 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.660 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:19.660 Delay0 00:09:19.660 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.661 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.661 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.661 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:19.661 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.661 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:19.661 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2980051 00:09:19.661 15:13:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:19.920 [2024-11-06 15:13:47.418304] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
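The delay bdev is what makes the upcoming delete meaningful: with roughly one second of artificial latency added to every I/O on NULL1, the queue-depth-128 workload started here is guaranteed to still have commands outstanding when the subsystem is torn down two seconds into its five-second run. The same sequence outside the harness would look roughly like this (the final delete is the RPC traced just below, repeated here only to show the timing):

# Add ~1 s of latency, expose the delayed bdev as a namespace, then delete the
# subsystem while the perf workload still has I/O in flight (sketch).
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
RPC="$SPDK"/scripts/rpc.py
"$RPC" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
"$SPDK"/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2
"$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1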
00:09:21.823 15:13:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:21.823 15:13:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.823 15:13:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:23.204 NVMe io qpair process completion error 00:09:23.204 NVMe io qpair process completion error 00:09:23.204 NVMe io qpair process completion error 00:09:23.204 NVMe io qpair process completion error 00:09:23.204 NVMe io qpair process completion error 00:09:23.204 NVMe io qpair process completion error 00:09:23.204 15:13:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.204 15:13:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:23.204 15:13:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2980051 00:09:23.204 15:13:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:23.463 15:13:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:23.463 15:13:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2980051 00:09:23.463 15:13:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:24.032 Read completed with error (sct=0, sc=8) 00:09:24.032 starting I/O failed: -6 00:09:24.032 Read completed with error (sct=0, sc=8) 00:09:24.032 starting I/O failed: -6 00:09:24.032 Write completed with error (sct=0, sc=8) 00:09:24.032 starting I/O failed: -6 00:09:24.032 Write completed with error (sct=0, sc=8) 00:09:24.032 starting I/O failed: -6 00:09:24.032 Write completed with error (sct=0, sc=8) 00:09:24.032 starting I/O failed: -6 00:09:24.032 Read completed with error (sct=0, sc=8) 00:09:24.032 starting I/O failed: -6 00:09:24.032 Write completed with error (sct=0, sc=8) 00:09:24.032 starting I/O failed: -6 00:09:24.032 Write completed with error (sct=0, sc=8) 00:09:24.032 starting I/O failed: -6 00:09:24.032 Read completed with error (sct=0, sc=8) 00:09:24.032 starting I/O failed: -6 00:09:24.032 Read completed with error (sct=0, sc=8) 00:09:24.032 starting I/O failed: -6 00:09:24.032 Write completed with error (sct=0, sc=8) 00:09:24.032 starting I/O failed: -6 00:09:24.032 Read completed with error (sct=0, sc=8) 00:09:24.032 starting I/O failed: -6 00:09:24.032 Read completed with error (sct=0, sc=8) 00:09:24.032 starting I/O failed: -6 00:09:24.032 Write completed with error (sct=0, sc=8) 00:09:24.032 starting I/O failed: -6 00:09:24.032 Read completed with error (sct=0, sc=8) 00:09:24.032 starting I/O failed: -6 00:09:24.032 Read completed with error (sct=0, sc=8) 00:09:24.032 starting I/O failed: -6 00:09:24.032 Write completed with error (sct=0, sc=8) 00:09:24.032 starting I/O failed: -6 00:09:24.032 Read completed with error (sct=0, sc=8) 00:09:24.032 starting I/O failed: -6 00:09:24.032 Read completed with error (sct=0, sc=8) 00:09:24.032 starting I/O failed: -6 00:09:24.032 Read completed with error (sct=0, sc=8) 00:09:24.032 starting I/O failed: -6 00:09:24.032 Read completed with error (sct=0, sc=8) 00:09:24.032 starting I/O failed: -6 00:09:24.032 Read completed with error (sct=0, sc=8) 00:09:24.032 starting I/O 
failed: -6 00:09:24.032 [several hundred further 'Read/Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' messages omitted]
00:09:24.034 Read completed with error (sct=0, sc=8) 00:09:24.034 Read completed with error (sct=0, sc=8) 00:09:24.034 Read completed with error (sct=0, sc=8) 00:09:24.034 Read completed with error (sct=0, sc=8) 00:09:24.034 Read completed with error (sct=0, sc=8) 00:09:24.034 Read completed with error (sct=0, sc=8) 00:09:24.034 Read completed with error (sct=0, sc=8) 00:09:24.034 Read completed with error (sct=0, sc=8) 00:09:24.034 Write completed with error (sct=0, sc=8) 00:09:24.034 Read completed with error (sct=0, sc=8) 00:09:24.034 Read completed with error (sct=0, sc=8) 00:09:24.034 Read completed with error (sct=0, sc=8) 00:09:24.034 Write completed with error (sct=0, sc=8) 00:09:24.034 Read completed with error (sct=0, sc=8) 00:09:24.034 Write completed with error (sct=0, sc=8) 00:09:24.034 Write completed with error (sct=0, sc=8) 00:09:24.034 Write completed with error (sct=0, sc=8) 00:09:24.034 Write completed with error (sct=0, sc=8) 00:09:24.034 Read completed with error (sct=0, sc=8) 00:09:24.034 15:13:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:24.034 15:13:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2980051 00:09:24.034 15:13:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:24.034 Initializing NVMe Controllers 00:09:24.034 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:09:24.034 Controller IO queue size 128, less than required. 00:09:24.034 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:24.034 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:24.034 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:24.034 Initialization complete. Launching workers. 00:09:24.034 ======================================================== 00:09:24.034 Latency(us) 00:09:24.034 Device Information : IOPS MiB/s Average min max 00:09:24.034 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.65 0.04 1591515.68 1000258.77 2967100.07 00:09:24.034 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.65 0.04 1593228.94 1001647.73 2968749.04 00:09:24.034 ======================================================== 00:09:24.034 Total : 161.29 0.08 1592372.31 1000258.77 2968749.04 00:09:24.034 00:09:24.034 [2024-11-06 15:13:51.563597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:09:24.034 [2024-11-06 15:13:51.563667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
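The xtrace lines above come from test/nvmf/target/delete_subsystem.sh polling the spdk_nvme_perf process (pid 2980051) while the subsystem it is exercising is deleted underneath it. A minimal sketch of that bounded-wait pattern, assuming only what is visible in the trace (the kill -0 probe, the 0.5 s sleep, and the >30 iteration bound); the variable names and break handling are illustrative, not the script verbatim:

    # Sketch: wait for a backgrounded perf process to exit, giving up after ~15 s.
    perf_pid=2980051                      # pid observed in the trace above
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && break       # roughly 30 * 0.5 s upper bound, as in the trace
        sleep 0.5
    done

Once kill -0 stops succeeding, the script falls through to the "kill: (2980051) - No such process" / "NOT wait 2980051" checks that follow.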
00:09:24.034 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:24.696 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:24.696 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2980051 00:09:24.696 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2980051) - No such process 00:09:24.696 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2980051 00:09:24.696 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:09:24.696 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2980051 00:09:24.696 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:09:24.696 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:24.696 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:09:24.696 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:24.697 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2980051 00:09:24.697 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:09:24.697 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:24.697 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:24.697 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:24.697 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:24.697 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.697 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:24.697 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.697 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:24.697 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.697 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:24.697 [2024-11-06 15:13:52.061537] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:24.697 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.697 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:24.697 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:24.697 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:24.697 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.697 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2980771 00:09:24.697 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:24.697 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:24.697 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2980771 00:09:24.697 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:24.697 [2024-11-06 15:13:52.223024] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:09:25.011 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:25.011 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2980771 00:09:25.011 15:13:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:25.602 15:13:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:25.602 15:13:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2980771 00:09:25.602 15:13:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:26.170 15:13:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:26.170 15:13:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2980771 00:09:26.170 15:13:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:26.736 15:13:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:26.736 15:13:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2980771 00:09:26.736 15:13:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:26.995 15:13:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:26.995 15:13:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2980771 00:09:26.995 15:13:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:27.562 15:13:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:27.562 15:13:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2980771 00:09:27.562 15:13:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:28.129 15:13:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:28.129 15:13:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2980771 00:09:28.129 15:13:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:28.696 15:13:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:28.696 15:13:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2980771 00:09:28.696 15:13:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:29.264 15:13:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:29.264 15:13:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2980771 00:09:29.264 15:13:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:29.524 15:13:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:29.524 15:13:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2980771 00:09:29.524 15:13:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:30.093 15:13:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:30.093 15:13:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2980771 00:09:30.093 15:13:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:30.660 15:13:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:30.660 15:13:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2980771 00:09:30.660 15:13:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:31.227 15:13:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:31.227 15:13:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2980771 00:09:31.227 15:13:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:31.795 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:31.795 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2980771 00:09:31.795 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:31.795 Initializing NVMe Controllers 00:09:31.795 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:09:31.795 Controller IO queue size 128, less than required. 00:09:31.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:31.795 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:31.795 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:31.795 Initialization complete. Launching workers. 00:09:31.795 ======================================================== 00:09:31.795 Latency(us) 00:09:31.795 Device Information : IOPS MiB/s Average min max 00:09:31.795 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001861.38 1000063.78 1005455.03 00:09:31.795 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002998.28 1000166.05 1008069.60 00:09:31.795 ======================================================== 00:09:31.795 Total : 256.00 0.12 1002429.83 1000063.78 1008069.60 00:09:31.795 00:09:32.054 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:32.054 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2980771 00:09:32.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2980771) - No such process 00:09:32.054 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2980771 00:09:32.054 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:32.054 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:32.054 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:32.054 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:09:32.054 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:32.054 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:32.054 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:09:32.054 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:32.054 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:32.054 rmmod nvme_rdma 00:09:32.313 rmmod nvme_fabrics 00:09:32.313 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:32.313 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:09:32.313 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:09:32.313 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2979897 ']' 00:09:32.313 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2979897 00:09:32.313 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 2979897 ']' 00:09:32.313 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 2979897 00:09:32.313 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:09:32.313 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
00:09:32.313 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2979897 00:09:32.313 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:32.313 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:32.313 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2979897' 00:09:32.313 killing process with pid 2979897 00:09:32.313 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 2979897 00:09:32.313 15:13:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 2979897 00:09:33.692 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:33.692 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:33.692 00:09:33.692 real 0m22.325s 00:09:33.692 user 0m52.436s 00:09:33.692 sys 0m6.894s 00:09:33.692 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:33.692 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:33.692 ************************************ 00:09:33.692 END TEST nvmf_delete_subsystem 00:09:33.692 ************************************ 00:09:33.692 15:14:01 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:09:33.692 15:14:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:33.692 15:14:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:33.692 15:14:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:33.692 ************************************ 00:09:33.692 START TEST nvmf_host_management 00:09:33.692 ************************************ 00:09:33.692 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:09:33.951 * Looking for test storage... 
00:09:33.951 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:33.951 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:33.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.952 --rc genhtml_branch_coverage=1 00:09:33.952 --rc genhtml_function_coverage=1 00:09:33.952 --rc genhtml_legend=1 00:09:33.952 --rc geninfo_all_blocks=1 00:09:33.952 --rc geninfo_unexecuted_blocks=1 00:09:33.952 00:09:33.952 ' 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:33.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.952 --rc genhtml_branch_coverage=1 00:09:33.952 --rc genhtml_function_coverage=1 00:09:33.952 --rc genhtml_legend=1 00:09:33.952 --rc geninfo_all_blocks=1 00:09:33.952 --rc geninfo_unexecuted_blocks=1 00:09:33.952 00:09:33.952 ' 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:33.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.952 --rc genhtml_branch_coverage=1 00:09:33.952 --rc genhtml_function_coverage=1 00:09:33.952 --rc genhtml_legend=1 00:09:33.952 --rc geninfo_all_blocks=1 00:09:33.952 --rc geninfo_unexecuted_blocks=1 00:09:33.952 00:09:33.952 ' 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:33.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.952 --rc genhtml_branch_coverage=1 00:09:33.952 --rc genhtml_function_coverage=1 00:09:33.952 --rc genhtml_legend=1 00:09:33.952 --rc geninfo_all_blocks=1 00:09:33.952 --rc geninfo_unexecuted_blocks=1 00:09:33.952 00:09:33.952 ' 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.952 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:33.953 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:33.953 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:33.953 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.953 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.953 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.953 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:33.953 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:33.953 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:33.953 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:33.953 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:33.953 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:33.953 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:33.953 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:33.953 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:33.953 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.953 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:33.953 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:33.953 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:33.953 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.953 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.953 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.953 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:33.953 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:33.953 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:09:33.953 15:14:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:09:42.077 15:14:08 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:42.077 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:42.077 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:42.077 Found net devices under 0000:18:00.0: mlx_0_0 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found 
net devices under 0000:18:00.1: mlx_0_1' 00:09:42.077 Found net devices under 0000:18:00.1: mlx_0_1 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # rdma_device_init 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:42.077 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 
00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:42.078 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:42.078 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:09:42.078 altname enp24s0f0np0 00:09:42.078 altname ens785f0np0 00:09:42.078 inet 192.168.100.8/24 scope global mlx_0_0 00:09:42.078 valid_lft forever preferred_lft forever 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:42.078 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:42.078 link/ether 50:6b:4b:4b:c9:af brd 
ff:ff:ff:ff:ff:ff 00:09:42.078 altname enp24s0f1np1 00:09:42.078 altname ens785f1np1 00:09:42.078 inet 192.168.100.9/24 scope global mlx_0_1 00:09:42.078 valid_lft forever preferred_lft forever 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:42.078 15:14:08 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:42.078 192.168.100.9' 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:42.078 192.168.100.9' 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # head -n 1 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:42.078 192.168.100.9' 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # tail -n +2 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # head -n 1 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2984974 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2984974 00:09:42.078 
15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2984974 ']' 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.078 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:42.079 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.079 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:42.079 15:14:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:42.079 [2024-11-06 15:14:08.610867] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:09:42.079 [2024-11-06 15:14:08.610985] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.079 [2024-11-06 15:14:08.763652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.079 [2024-11-06 15:14:08.876774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.079 [2024-11-06 15:14:08.876832] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:42.079 [2024-11-06 15:14:08.876846] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.079 [2024-11-06 15:14:08.876859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.079 [2024-11-06 15:14:08.876868] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:42.079 [2024-11-06 15:14:08.879405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.079 [2024-11-06 15:14:08.879493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.079 [2024-11-06 15:14:08.879563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.079 [2024-11-06 15:14:08.879590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:42.079 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:42.079 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:09:42.079 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:42.079 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:42.079 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:42.079 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.079 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:42.079 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.079 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:42.079 [2024-11-06 15:14:09.487983] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7f8f3bbbd940) succeed. 00:09:42.079 [2024-11-06 15:14:09.497598] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7f8f3bb79940) succeed. 
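Once the reactors are up, the RDMA transport is created over the target's RPC socket, and the two `create_ib_device ... succeed` notices confirm both mlx5 ports were claimed. The same call can be issued by hand with scripts/rpc.py using the parameters from the trace; /var/tmp/spdk.sock is the socket this target was started on:

```bash
# Equivalent manual invocation of the transport-creation RPC traced above.
./scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
```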
00:09:42.338 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.338 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:42.338 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:42.338 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:42.338 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:42.338 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:42.338 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:42.338 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.338 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:42.338 Malloc0 00:09:42.338 [2024-11-06 15:14:09.911060] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:42.338 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.338 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:42.338 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:42.338 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:42.338 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2985203 00:09:42.338 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2985203 /var/tmp/bdevperf.sock 00:09:42.338 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2985203 ']' 00:09:42.597 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:42.597 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:42.597 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:42.597 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:42.597 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:42.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
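The `cat`/`rpc_cmd` pair at host_management.sh@23-30 writes a batch of RPCs into rpcs.txt and replays it in one shot. The file contents are not echoed, but the Malloc0 bdev, the cnode0/host0 NQNs and the listener on 192.168.100.8:4420 visible around it imply a batch along these lines (a hedged reconstruction, not the literal file; the 64 MiB / 512 B malloc geometry is the suite's usual default):

```bash
# Likely shape of rpcs.txt, inferred from the bdev, subsystem, host NQN and
# listener that appear in the surrounding log lines.
bdev_malloc_create 64 512 -b Malloc0
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
```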
00:09:42.597 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:42.597 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:42.597 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:42.597 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:42.597 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:42.597 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:42.597 { 00:09:42.597 "params": { 00:09:42.597 "name": "Nvme$subsystem", 00:09:42.597 "trtype": "$TEST_TRANSPORT", 00:09:42.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:42.597 "adrfam": "ipv4", 00:09:42.597 "trsvcid": "$NVMF_PORT", 00:09:42.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:42.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:42.597 "hdgst": ${hdgst:-false}, 00:09:42.597 "ddgst": ${ddgst:-false} 00:09:42.597 }, 00:09:42.597 "method": "bdev_nvme_attach_controller" 00:09:42.597 } 00:09:42.597 EOF 00:09:42.597 )") 00:09:42.597 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:42.597 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:42.597 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:42.597 15:14:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:42.597 "params": { 00:09:42.597 "name": "Nvme0", 00:09:42.597 "trtype": "rdma", 00:09:42.597 "traddr": "192.168.100.8", 00:09:42.597 "adrfam": "ipv4", 00:09:42.597 "trsvcid": "4420", 00:09:42.597 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:42.597 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:42.597 "hdgst": false, 00:09:42.597 "ddgst": false 00:09:42.597 }, 00:09:42.597 "method": "bdev_nvme_attach_controller" 00:09:42.597 }' 00:09:42.597 [2024-11-06 15:14:10.061098] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:09:42.597 [2024-11-06 15:14:10.061211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2985203 ] 00:09:42.597 [2024-11-06 15:14:10.213285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.856 [2024-11-06 15:14:10.328147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.114 Running I/O for 10 seconds... 
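The `config+=("$(cat <<-EOF ...)")` template expanded above is how gen_nvmf_target_json builds one bdev_nvme_attach_controller stanza per subsystem and prints the whole thing as a single JSON document; bdevperf reads it through a process substitution (`--json /dev/fd/63` in the traced command), so no config file touches disk. The effective invocation, as it also appears later in the script's own kill message, is:

```bash
# bdevperf run against the target, fed its config via process substitution.
# gen_nvmf_target_json is the suite helper traced above.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10
```

`-q 64` is the queue depth, `-o 65536` the I/O size in bytes, `-w verify` the workload and `-t 10` the run time in seconds, matching the "depth: 64, IO size: 65536" job line reported further down.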
00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=434 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 434 -ge 100 ']' 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
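waitforio, traced above, is what gates the disruptive part of the test: it polls the initiator's RPC socket until the Nvme0n1 bdev has accumulated at least 100 reads (434 on the first poll here), so the host removal that follows is guaranteed to interrupt in-flight I/O. A condensed sketch of the same logic using the suite's rpc_cmd wrapper; the sleep between polls is an assumption, since the trace breaks out on the first iteration:

```bash
# Poll bdevperf's iostat until the bdev has completed at least 100 reads,
# giving up after 10 attempts.
waitforio() {
    local sock=$1 bdev=$2 ret=1 i count
    for ((i = 10; i != 0; i--)); do
        count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        if [[ $count -ge 100 ]]; then
            ret=0
            break
        fi
        sleep 0.25  # pause between polls is an assumption; not visible in the trace
    done
    return $ret
}
```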
00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.373 15:14:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:44.570 527.00 IOPS, 32.94 MiB/s [2024-11-06T14:14:12.205Z] [2024-11-06 15:14:11.996366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000167afcc0 len:0x10000 key:0x182300 00:09:44.570 [2024-11-06 15:14:11.996437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.996480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001679fc00 len:0x10000 key:0x182300 00:09:44.570 [2024-11-06 15:14:11.996496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.996513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001678fb40 len:0x10000 key:0x182300 00:09:44.570 [2024-11-06 15:14:11.996527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.996543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001677fa80 len:0x10000 key:0x182300 00:09:44.570 [2024-11-06 15:14:11.996556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.996572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001676f9c0 len:0x10000 key:0x182300 00:09:44.570 [2024-11-06 15:14:11.996585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.996600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001675f900 len:0x10000 key:0x182300 00:09:44.570 [2024-11-06 15:14:11.996616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.996631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001674f840 len:0x10000 key:0x182300 00:09:44.570 [2024-11-06 15:14:11.996645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 
15:14:11.996661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001673f780 len:0x10000 key:0x182300 00:09:44.570 [2024-11-06 15:14:11.996676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.996692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001672f6c0 len:0x10000 key:0x182300 00:09:44.570 [2024-11-06 15:14:11.996706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.996725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001671f600 len:0x10000 key:0x182300 00:09:44.570 [2024-11-06 15:14:11.996738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.996758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001670f540 len:0x10000 key:0x182300 00:09:44.570 [2024-11-06 15:14:11.996774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.996789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000084b380 len:0x10000 key:0x181d00 00:09:44.570 [2024-11-06 15:14:11.996803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.996820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000083b2c0 len:0x10000 key:0x181d00 00:09:44.570 [2024-11-06 15:14:11.996834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.996850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000082b200 len:0x10000 key:0x181d00 00:09:44.570 [2024-11-06 15:14:11.996863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.996879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000081b140 len:0x10000 key:0x181d00 00:09:44.570 [2024-11-06 15:14:11.996892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.996907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000080b080 len:0x10000 key:0x181d00 00:09:44.570 [2024-11-06 15:14:11.996920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.996935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 
nsid:1 lba:73600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016cefd00 len:0x10000 key:0x182500 00:09:44.570 [2024-11-06 15:14:11.996948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.996963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:67456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b79f000 len:0x10000 key:0x182100 00:09:44.570 [2024-11-06 15:14:11.996975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.996991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:67584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb9e000 len:0x10000 key:0x182100 00:09:44.570 [2024-11-06 15:14:11.997003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.997018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb7d000 len:0x10000 key:0x182100 00:09:44.570 [2024-11-06 15:14:11.997030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.997045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:67840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb5c000 len:0x10000 key:0x182100 00:09:44.570 [2024-11-06 15:14:11.997059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.997073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb3b000 len:0x10000 key:0x182100 00:09:44.570 [2024-11-06 15:14:11.997085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.997099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:68096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb1a000 len:0x10000 key:0x182100 00:09:44.570 [2024-11-06 15:14:11.997112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.997352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:68224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000baf9000 len:0x10000 key:0x182100 00:09:44.570 [2024-11-06 15:14:11.997368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.997385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:68352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bad8000 len:0x10000 key:0x182100 00:09:44.570 [2024-11-06 15:14:11.997398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.997413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:68480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bab7000 len:0x10000 key:0x182100 00:09:44.570 
[2024-11-06 15:14:11.997426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.997444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:68608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba96000 len:0x10000 key:0x182100 00:09:44.570 [2024-11-06 15:14:11.997457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.997471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:68736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba75000 len:0x10000 key:0x182100 00:09:44.570 [2024-11-06 15:14:11.997484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.997500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:68864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba54000 len:0x10000 key:0x182100 00:09:44.570 [2024-11-06 15:14:11.997512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.997535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:68992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba33000 len:0x10000 key:0x182100 00:09:44.570 [2024-11-06 15:14:11.997549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.997564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:69120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba12000 len:0x10000 key:0x182100 00:09:44.570 [2024-11-06 15:14:11.997577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.570 [2024-11-06 15:14:11.997592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:69248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b9f1000 len:0x10000 key:0x182100 00:09:44.571 [2024-11-06 15:14:11.997606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.997623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:69376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b9d0000 len:0x10000 key:0x182100 00:09:44.571 [2024-11-06 15:14:11.997636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.997651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:69504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b9af000 len:0x10000 key:0x182100 00:09:44.571 [2024-11-06 15:14:11.997665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.997679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:69632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bdae000 len:0x10000 key:0x182100 00:09:44.571 [2024-11-06 15:14:11.997692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.997707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:69760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd8d000 len:0x10000 key:0x182100 00:09:44.571 [2024-11-06 15:14:11.997722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.997737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:69888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd6c000 len:0x10000 key:0x182100 00:09:44.571 [2024-11-06 15:14:11.997750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.997764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:70016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd4b000 len:0x10000 key:0x182100 00:09:44.571 [2024-11-06 15:14:11.997777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.997794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:70144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd2a000 len:0x10000 key:0x182100 00:09:44.571 [2024-11-06 15:14:11.997806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.997821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:70272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd09000 len:0x10000 key:0x182100 00:09:44.571 [2024-11-06 15:14:11.997833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.997848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:70400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bce8000 len:0x10000 key:0x182100 00:09:44.571 [2024-11-06 15:14:11.997861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.997875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bcc7000 len:0x10000 key:0x182100 00:09:44.571 [2024-11-06 15:14:11.997887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.997904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bca6000 len:0x10000 key:0x182100 00:09:44.571 [2024-11-06 15:14:11.997917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.997932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:70784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc85000 len:0x10000 key:0x182100 00:09:44.571 [2024-11-06 15:14:11.998079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.998095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc64000 len:0x10000 key:0x182100 00:09:44.571 [2024-11-06 15:14:11.998108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.998123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc43000 len:0x10000 key:0x182100 00:09:44.571 [2024-11-06 15:14:11.998143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.998159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:71168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc22000 len:0x10000 key:0x182100 00:09:44.571 [2024-11-06 15:14:11.998172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.998187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc01000 len:0x10000 key:0x182100 00:09:44.571 [2024-11-06 15:14:11.998200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.998215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bbe0000 len:0x10000 key:0x182100 00:09:44.571 [2024-11-06 15:14:11.998229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.998244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016cdfc40 len:0x10000 key:0x182500 00:09:44.571 [2024-11-06 15:14:11.998256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.998270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:73856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016ccfb80 len:0x10000 key:0x182500 00:09:44.571 [2024-11-06 15:14:11.998283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.998298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016cbfac0 len:0x10000 key:0x182500 00:09:44.571 [2024-11-06 15:14:11.998310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.998324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016cafa00 len:0x10000 key:0x182500 00:09:44.571 [2024-11-06 15:14:11.998336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.998351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74240 len:128 SGL KEYED DATA 
BLOCK ADDRESS 0x200016c9f940 len:0x10000 key:0x182500 00:09:44.571 [2024-11-06 15:14:11.998364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.998378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016c8f880 len:0x10000 key:0x182500 00:09:44.571 [2024-11-06 15:14:11.998391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.998408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:74496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016c7f7c0 len:0x10000 key:0x182500 00:09:44.571 [2024-11-06 15:14:11.998420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.998434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016c6f700 len:0x10000 key:0x182500 00:09:44.571 [2024-11-06 15:14:11.998446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.998461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016c5f640 len:0x10000 key:0x182500 00:09:44.571 [2024-11-06 15:14:11.998474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.998490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016c4f580 len:0x10000 key:0x182500 00:09:44.571 [2024-11-06 15:14:11.998502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.998517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016c3f4c0 len:0x10000 key:0x182500 00:09:44.571 [2024-11-06 15:14:11.998529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.998544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016c2f400 len:0x10000 key:0x182500 00:09:44.571 [2024-11-06 15:14:11.998556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.998570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016c1f340 len:0x10000 key:0x182500 00:09:44.571 [2024-11-06 15:14:11.998582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.998597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016c0f280 len:0x10000 key:0x182500 00:09:44.571 [2024-11-06 15:14:11.998609] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 [2024-11-06 15:14:11.998624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016fd2e00 len:0x10000 key:0x182500 00:09:44.571 [2024-11-06 15:14:11.998636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:44.571 15:14:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2985203 00:09:44.571 15:14:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:44.571 15:14:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:44.571 15:14:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:44.571 15:14:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:44.571 15:14:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:44.571 15:14:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:44.571 15:14:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:44.571 { 00:09:44.571 "params": { 00:09:44.571 "name": "Nvme$subsystem", 00:09:44.571 "trtype": "$TEST_TRANSPORT", 00:09:44.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:44.571 "adrfam": "ipv4", 00:09:44.571 "trsvcid": "$NVMF_PORT", 00:09:44.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:44.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:44.572 "hdgst": ${hdgst:-false}, 00:09:44.572 "ddgst": ${ddgst:-false} 00:09:44.572 }, 00:09:44.572 "method": "bdev_nvme_attach_controller" 00:09:44.572 } 00:09:44.572 EOF 00:09:44.572 )") 00:09:44.572 15:14:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:44.572 15:14:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:44.572 15:14:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:44.572 15:14:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:44.572 "params": { 00:09:44.572 "name": "Nvme0", 00:09:44.572 "trtype": "rdma", 00:09:44.572 "traddr": "192.168.100.8", 00:09:44.572 "adrfam": "ipv4", 00:09:44.572 "trsvcid": "4420", 00:09:44.572 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:44.572 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:44.572 "hdgst": false, 00:09:44.572 "ddgst": false 00:09:44.572 }, 00:09:44.572 "method": "bdev_nvme_attach_controller" 00:09:44.572 }' 00:09:44.572 [2024-11-06 15:14:12.092242] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
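The wall of ABORTED - SQ DELETION completions above is the expected fallout of the host-management sequence: removing nqn.2016-06.io.spdk:host0 from cnode0 while bdevperf is mid-verify forces the target to tear down the RDMA queue pair, so every outstanding read and write completes as aborted. The host entry is then re-added, the first bdevperf (pid 2985203) is killed, and a second one-second bdevperf pass is launched against the same subsystem to prove the target recovered. In script form the disruption boils down to the following, a sketch of the sequence visible in the trace rather than the literal test code:

```bash
# Disrupt-and-recover sequence reconstructed from the traced RPCs.
rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
rpc_cmd nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1
kill -9 "$perfpid"   # first bdevperf, pid 2985203 in this run
# short re-run to confirm I/O still completes after the host entry was restored
./build/examples/bdevperf --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 1
```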
00:09:44.572 [2024-11-06 15:14:12.092342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2985406 ] 00:09:44.830 [2024-11-06 15:14:12.242529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.830 [2024-11-06 15:14:12.355852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.398 Running I/O for 1 seconds... 00:09:46.333 2605.00 IOPS, 162.81 MiB/s 00:09:46.333 Latency(us) 00:09:46.333 [2024-11-06T14:14:13.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.333 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:46.333 Verification LBA range: start 0x0 length 0x400 00:09:46.333 Nvme0n1 : 1.01 2644.34 165.27 0.00 0.00 23686.54 1210.99 40119.43 00:09:46.333 [2024-11-06T14:14:13.968Z] =================================================================================================================== 00:09:46.333 [2024-11-06T14:14:13.968Z] Total : 2644.34 165.27 0.00 0.00 23686.54 1210.99 40119.43 00:09:47.270 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 2985203 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:47.270 rmmod nvme_rdma 00:09:47.270 rmmod nvme_fabrics 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2984974 ']' 00:09:47.270 15:14:14 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2984974 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 2984974 ']' 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 2984974 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2984974 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2984974' 00:09:47.270 killing process with pid 2984974 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 2984974 00:09:47.270 15:14:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 2984974 00:09:49.175 [2024-11-06 15:14:16.674344] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:49.175 15:14:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:49.175 15:14:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:49.175 15:14:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:49.175 00:09:49.175 real 0m15.449s 00:09:49.175 user 0m35.896s 00:09:49.175 sys 0m7.004s 00:09:49.175 15:14:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:49.175 15:14:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:49.175 ************************************ 00:09:49.175 END TEST nvmf_host_management 00:09:49.175 ************************************ 00:09:49.435 15:14:16 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:09:49.435 15:14:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:49.435 15:14:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:49.435 15:14:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:49.435 ************************************ 00:09:49.435 START TEST nvmf_lvol 00:09:49.435 ************************************ 00:09:49.435 15:14:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:09:49.435 * Looking for test storage... 
00:09:49.435 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:49.435 15:14:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:49.435 15:14:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:09:49.435 15:14:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:49.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.435 --rc genhtml_branch_coverage=1 00:09:49.435 --rc genhtml_function_coverage=1 00:09:49.435 --rc genhtml_legend=1 00:09:49.435 --rc geninfo_all_blocks=1 00:09:49.435 --rc geninfo_unexecuted_blocks=1 00:09:49.435 00:09:49.435 ' 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:49.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.435 --rc genhtml_branch_coverage=1 00:09:49.435 --rc genhtml_function_coverage=1 00:09:49.435 --rc genhtml_legend=1 00:09:49.435 --rc geninfo_all_blocks=1 00:09:49.435 --rc geninfo_unexecuted_blocks=1 00:09:49.435 00:09:49.435 ' 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:49.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.435 --rc genhtml_branch_coverage=1 00:09:49.435 --rc genhtml_function_coverage=1 00:09:49.435 --rc genhtml_legend=1 00:09:49.435 --rc geninfo_all_blocks=1 00:09:49.435 --rc geninfo_unexecuted_blocks=1 00:09:49.435 00:09:49.435 ' 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:49.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.435 --rc genhtml_branch_coverage=1 00:09:49.435 --rc genhtml_function_coverage=1 00:09:49.435 --rc genhtml_legend=1 00:09:49.435 --rc geninfo_all_blocks=1 00:09:49.435 --rc geninfo_unexecuted_blocks=1 00:09:49.435 00:09:49.435 ' 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.435 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:49.695 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:49.695 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.696 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.696 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.696 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:49.696 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:49.696 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:09:49.696 15:14:17 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:56.264 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:56.265 15:14:23 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:56.265 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:56.265 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:56.265 15:14:23 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:56.265 Found net devices under 0000:18:00.0: mlx_0_0 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:56.265 Found net devices under 0000:18:00.1: mlx_0_1 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # rdma_device_init 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core 
00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:09:56.265 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:56.525 
15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:56.525 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:56.525 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:09:56.525 altname enp24s0f0np0 00:09:56.525 altname ens785f0np0 00:09:56.525 inet 192.168.100.8/24 scope global mlx_0_0 00:09:56.525 valid_lft forever preferred_lft forever 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:56.525 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:56.525 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:09:56.525 altname enp24s0f1np1 00:09:56.525 altname ens785f1np1 00:09:56.525 inet 192.168.100.9/24 scope global mlx_0_1 00:09:56.525 valid_lft forever preferred_lft forever 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@109 -- # continue 2 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:56.525 15:14:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:56.525 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:56.525 192.168.100.9' 00:09:56.525 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:56.525 192.168.100.9' 00:09:56.525 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # head -n 1 00:09:56.525 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:56.525 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:56.525 192.168.100.9' 00:09:56.525 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # tail -n +2 00:09:56.525 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # head -n 1 00:09:56.525 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:56.525 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:56.525 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:56.525 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:56.525 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:56.525 
15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:56.525 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:56.525 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:56.525 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:56.525 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:56.525 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2989045 00:09:56.525 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:56.525 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2989045 00:09:56.526 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 2989045 ']' 00:09:56.526 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.526 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:56.526 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.526 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:56.526 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:56.526 [2024-11-06 15:14:24.154687] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:09:56.526 [2024-11-06 15:14:24.154821] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.784 [2024-11-06 15:14:24.306757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:56.784 [2024-11-06 15:14:24.414273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.784 [2024-11-06 15:14:24.414328] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.784 [2024-11-06 15:14:24.414340] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.784 [2024-11-06 15:14:24.414354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.784 [2024-11-06 15:14:24.414363] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:56.784 [2024-11-06 15:14:24.416559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.784 [2024-11-06 15:14:24.416619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.785 [2024-11-06 15:14:24.416645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.353 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:57.353 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:09:57.353 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:57.353 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:57.353 15:14:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:57.612 15:14:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.612 15:14:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:57.612 [2024-11-06 15:14:25.235831] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7fa492bbd940) succeed. 00:09:57.612 [2024-11-06 15:14:25.245267] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7fa492b79940) succeed. 00:09:57.871 15:14:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:58.130 15:14:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:58.130 15:14:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:58.699 15:14:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:58.699 15:14:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:58.699 15:14:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:58.958 15:14:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=004321e4-c5af-4410-bb85-d3391e7b4da7 00:09:58.958 15:14:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 004321e4-c5af-4410-bb85-d3391e7b4da7 lvol 20 00:09:59.217 15:14:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=5b77a9dd-52c2-46af-8d50-67d2e213f106 00:09:59.217 15:14:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:59.476 15:14:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5b77a9dd-52c2-46af-8d50-67d2e213f106 00:09:59.476 15:14:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:59.735 [2024-11-06 15:14:27.255345] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:59.735 15:14:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:59.994 15:14:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2989453 00:09:59.994 15:14:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:59.994 15:14:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:00.931 15:14:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 5b77a9dd-52c2-46af-8d50-67d2e213f106 MY_SNAPSHOT 00:10:01.190 15:14:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b99cb7d2-840e-4261-9413-f504cbef0621 00:10:01.190 15:14:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 5b77a9dd-52c2-46af-8d50-67d2e213f106 30 00:10:01.449 15:14:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b99cb7d2-840e-4261-9413-f504cbef0621 MY_CLONE 00:10:01.707 15:14:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d8f8f09b-8b7d-4bbe-9551-916f09ce403a 00:10:01.707 15:14:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d8f8f09b-8b7d-4bbe-9551-916f09ce403a 00:10:01.967 15:14:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2989453 00:10:11.996 Initializing NVMe Controllers 00:10:11.996 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:10:11.996 Controller IO queue size 128, less than required. 00:10:11.996 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:11.996 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:11.996 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:11.996 Initialization complete. Launching workers. 
00:10:11.996 ======================================================== 00:10:11.996 Latency(us) 00:10:11.996 Device Information : IOPS MiB/s Average min max 00:10:11.996 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15023.50 58.69 8521.93 3664.04 149938.65 00:10:11.996 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 14885.90 58.15 8599.41 3868.50 163703.99 00:10:11.996 ======================================================== 00:10:11.996 Total : 29909.39 116.83 8560.49 3664.04 163703.99 00:10:11.996 00:10:11.996 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:11.996 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5b77a9dd-52c2-46af-8d50-67d2e213f106 00:10:11.996 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 004321e4-c5af-4410-bb85-d3391e7b4da7 00:10:12.254 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:12.254 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:12.254 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:12.254 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:12.254 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:10:12.254 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:12.254 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:12.254 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:10:12.254 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:12.254 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:12.254 rmmod nvme_rdma 00:10:12.254 rmmod nvme_fabrics 00:10:12.254 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:12.254 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:10:12.254 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:10:12.254 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2989045 ']' 00:10:12.254 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2989045 00:10:12.254 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 2989045 ']' 00:10:12.254 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 2989045 00:10:12.254 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:10:12.254 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:12.254 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2989045 00:10:12.254 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:12.254 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:12.254 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2989045' 00:10:12.254 killing process with pid 2989045 00:10:12.254 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 2989045 00:10:12.254 15:14:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 2989045 00:10:14.158 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:14.158 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:14.158 00:10:14.158 real 0m24.853s 00:10:14.158 user 1m17.708s 00:10:14.158 sys 0m6.990s 00:10:14.158 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:14.158 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:14.158 ************************************ 00:10:14.158 END TEST nvmf_lvol 00:10:14.158 ************************************ 00:10:14.158 15:14:41 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:10:14.158 15:14:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:14.158 15:14:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:14.158 15:14:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:14.418 ************************************ 00:10:14.418 START TEST nvmf_lvs_grow 00:10:14.418 ************************************ 00:10:14.418 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:10:14.418 * Looking for test storage... 
00:10:14.418 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:14.418 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:14.418 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:10:14.418 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:14.418 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:14.418 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:14.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.419 --rc genhtml_branch_coverage=1 00:10:14.419 --rc genhtml_function_coverage=1 00:10:14.419 --rc genhtml_legend=1 00:10:14.419 --rc geninfo_all_blocks=1 00:10:14.419 --rc geninfo_unexecuted_blocks=1 00:10:14.419 00:10:14.419 ' 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:14.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.419 --rc genhtml_branch_coverage=1 00:10:14.419 --rc genhtml_function_coverage=1 00:10:14.419 --rc genhtml_legend=1 00:10:14.419 --rc geninfo_all_blocks=1 00:10:14.419 --rc geninfo_unexecuted_blocks=1 00:10:14.419 00:10:14.419 ' 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:14.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.419 --rc genhtml_branch_coverage=1 00:10:14.419 --rc genhtml_function_coverage=1 00:10:14.419 --rc genhtml_legend=1 00:10:14.419 --rc geninfo_all_blocks=1 00:10:14.419 --rc geninfo_unexecuted_blocks=1 00:10:14.419 00:10:14.419 ' 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:14.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.419 --rc genhtml_branch_coverage=1 00:10:14.419 --rc genhtml_function_coverage=1 00:10:14.419 --rc genhtml_legend=1 00:10:14.419 --rc geninfo_all_blocks=1 00:10:14.419 --rc geninfo_unexecuted_blocks=1 00:10:14.419 00:10:14.419 ' 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 
00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.419 15:14:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.419 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.419 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.419 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.419 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.419 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.419 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.419 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:10:14.419 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:10:14.419 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.419 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.419 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.419 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.419 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:14.419 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:14.419 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.419 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.419 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.419 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:14.420 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:10:14.420 15:14:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:22.543 15:14:48 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:22.543 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:22.543 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:22.543 15:14:48 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.543 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:22.543 Found net devices under 0000:18:00.0: mlx_0_0 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:22.544 Found net devices under 0000:18:00.1: mlx_0_1 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # rdma_device_init 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # 
modprobe ib_cm 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:22.544 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:22.544 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:10:22.544 altname enp24s0f0np0 00:10:22.544 altname ens785f0np0 00:10:22.544 inet 192.168.100.8/24 scope global mlx_0_0 00:10:22.544 valid_lft forever preferred_lft forever 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:22.544 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:22.544 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:10:22.544 altname enp24s0f1np1 00:10:22.544 altname ens785f1np1 00:10:22.544 inet 192.168.100.9/24 scope global mlx_0_1 00:10:22.544 valid_lft forever preferred_lft forever 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:22.544 15:14:48 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:22.544 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:22.545 192.168.100.9' 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:22.545 192.168.100.9' 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # head -n 1 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:22.545 192.168.100.9' 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # tail -n +2 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # head -n 1 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2994375 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2994375 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 2994375 ']' 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:22.545 15:14:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:22.545 [2024-11-06 15:14:49.079730] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:10:22.545 [2024-11-06 15:14:49.079837] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.545 [2024-11-06 15:14:49.227030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.545 [2024-11-06 15:14:49.330620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.545 [2024-11-06 15:14:49.330677] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.545 [2024-11-06 15:14:49.330691] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:22.545 [2024-11-06 15:14:49.330705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:22.545 [2024-11-06 15:14:49.330714] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
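Condensed, the rdma_device_init/allocate_nic_ips sequence traced above comes down to the following sketch (the real helpers live in nvmf/common.sh; the interface names and the 192.168.100.0/24 addresses are the ones this run reports):

    # Load the RDMA stack in the same order as the trace.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done

    # Same pipeline as nvmf/common.sh@117: IPv4 address of an interface, prefix stripped.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }

    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'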
00:10:22.545 [2024-11-06 15:14:49.332097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.545 15:14:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:22.545 15:14:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:10:22.545 15:14:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:22.545 15:14:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:22.545 15:14:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:22.545 15:14:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.545 15:14:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:22.545 [2024-11-06 15:14:50.131482] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7fea7df3e940) succeed. 00:10:22.545 [2024-11-06 15:14:50.141150] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7fea7ddbd940) succeed. 00:10:22.804 15:14:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:22.804 15:14:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:22.804 15:14:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:22.804 15:14:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:22.804 ************************************ 00:10:22.804 START TEST lvs_grow_clean 00:10:22.804 ************************************ 00:10:22.804 15:14:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:10:22.804 15:14:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:22.804 15:14:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:22.804 15:14:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:22.804 15:14:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:22.804 15:14:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:22.804 15:14:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:22.804 15:14:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:22.804 15:14:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:22.804 15:14:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:23.063 15:14:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:23.063 15:14:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:23.063 15:14:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a713e3cf-497b-4ed2-b929-1c84d52d05b8 00:10:23.322 15:14:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a713e3cf-497b-4ed2-b929-1c84d52d05b8 00:10:23.322 15:14:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:23.322 15:14:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:23.322 15:14:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:23.322 15:14:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a713e3cf-497b-4ed2-b929-1c84d52d05b8 lvol 150 00:10:23.581 15:14:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2884a858-a2b2-4246-8493-fb372c122720 00:10:23.581 15:14:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:23.581 15:14:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:23.953 [2024-11-06 15:14:51.295868] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:23.953 [2024-11-06 15:14:51.295948] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:23.953 true 00:10:23.953 15:14:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a713e3cf-497b-4ed2-b929-1c84d52d05b8 00:10:23.953 15:14:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:23.953 15:14:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:23.953 15:14:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:24.268 15:14:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2884a858-a2b2-4246-8493-fb372c122720 00:10:24.268 15:14:51 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:24.527 [2024-11-06 15:14:52.046450] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:24.527 15:14:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:24.786 15:14:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2994799 00:10:24.786 15:14:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:24.786 15:14:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2994799 /var/tmp/bdevperf.sock 00:10:24.786 15:14:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 2994799 ']' 00:10:24.786 15:14:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:24.786 15:14:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:24.786 15:14:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:24.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:24.786 15:14:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:24.786 15:14:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:24.786 15:14:52 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:24.786 [2024-11-06 15:14:52.348803] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
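The lvs_grow_clean setup traced so far reduces to the RPC sequence below; this is a sketch with absolute paths shortened, rpc.py standing for scripts/rpc.py against the running nvmf_tgt, and the UUIDs captured from the create calls rather than hard-coded.

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

    aio_file=test/nvmf/target/aio_bdev
    rm -f "$aio_file" && truncate -s 200M "$aio_file"

    rpc.py bdev_aio_create "$aio_file" aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)

    # Grow the backing file and let the aio bdev pick up the new size.
    truncate -s 400M "$aio_file"
    rpc.py bdev_aio_rescan aio_bdev

    # Export the lvol over NVMe-oF/RDMA on the first target IP.
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t rdma -a 192.168.100.8 -s 4420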
00:10:24.786 [2024-11-06 15:14:52.348902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2994799 ] 00:10:25.046 [2024-11-06 15:14:52.494247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.046 [2024-11-06 15:14:52.602917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.614 15:14:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:25.614 15:14:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:10:25.615 15:14:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:25.873 Nvme0n1 00:10:25.873 15:14:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:26.132 [ 00:10:26.132 { 00:10:26.132 "name": "Nvme0n1", 00:10:26.132 "aliases": [ 00:10:26.132 "2884a858-a2b2-4246-8493-fb372c122720" 00:10:26.132 ], 00:10:26.132 "product_name": "NVMe disk", 00:10:26.132 "block_size": 4096, 00:10:26.132 "num_blocks": 38912, 00:10:26.132 "uuid": "2884a858-a2b2-4246-8493-fb372c122720", 00:10:26.132 "numa_id": 0, 00:10:26.132 "assigned_rate_limits": { 00:10:26.132 "rw_ios_per_sec": 0, 00:10:26.132 "rw_mbytes_per_sec": 0, 00:10:26.132 "r_mbytes_per_sec": 0, 00:10:26.132 "w_mbytes_per_sec": 0 00:10:26.132 }, 00:10:26.132 "claimed": false, 00:10:26.132 "zoned": false, 00:10:26.132 "supported_io_types": { 00:10:26.132 "read": true, 00:10:26.132 "write": true, 00:10:26.132 "unmap": true, 00:10:26.132 "flush": true, 00:10:26.132 "reset": true, 00:10:26.132 "nvme_admin": true, 00:10:26.132 "nvme_io": true, 00:10:26.132 "nvme_io_md": false, 00:10:26.132 "write_zeroes": true, 00:10:26.132 "zcopy": false, 00:10:26.132 "get_zone_info": false, 00:10:26.132 "zone_management": false, 00:10:26.132 "zone_append": false, 00:10:26.132 "compare": true, 00:10:26.132 "compare_and_write": true, 00:10:26.132 "abort": true, 00:10:26.132 "seek_hole": false, 00:10:26.132 "seek_data": false, 00:10:26.132 "copy": true, 00:10:26.132 "nvme_iov_md": false 00:10:26.132 }, 00:10:26.132 "memory_domains": [ 00:10:26.132 { 00:10:26.132 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:10:26.132 "dma_device_type": 0 00:10:26.132 } 00:10:26.132 ], 00:10:26.132 "driver_specific": { 00:10:26.132 "nvme": [ 00:10:26.132 { 00:10:26.132 "trid": { 00:10:26.132 "trtype": "RDMA", 00:10:26.132 "adrfam": "IPv4", 00:10:26.132 "traddr": "192.168.100.8", 00:10:26.132 "trsvcid": "4420", 00:10:26.132 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:26.132 }, 00:10:26.132 "ctrlr_data": { 00:10:26.132 "cntlid": 1, 00:10:26.132 "vendor_id": "0x8086", 00:10:26.132 "model_number": "SPDK bdev Controller", 00:10:26.132 "serial_number": "SPDK0", 00:10:26.132 "firmware_revision": "25.01", 00:10:26.132 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:26.132 "oacs": { 00:10:26.132 "security": 0, 00:10:26.132 "format": 0, 00:10:26.132 "firmware": 0, 00:10:26.132 "ns_manage": 0 00:10:26.132 }, 00:10:26.132 "multi_ctrlr": true, 
00:10:26.132 "ana_reporting": false 00:10:26.132 }, 00:10:26.132 "vs": { 00:10:26.132 "nvme_version": "1.3" 00:10:26.132 }, 00:10:26.132 "ns_data": { 00:10:26.132 "id": 1, 00:10:26.132 "can_share": true 00:10:26.132 } 00:10:26.132 } 00:10:26.132 ], 00:10:26.132 "mp_policy": "active_passive" 00:10:26.132 } 00:10:26.132 } 00:10:26.132 ] 00:10:26.132 15:14:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:26.132 15:14:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2994985 00:10:26.132 15:14:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:26.132 Running I/O for 10 seconds... 00:10:27.512 Latency(us) 00:10:27.512 [2024-11-06T14:14:55.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:27.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:27.512 Nvme0n1 : 1.00 29184.00 114.00 0.00 0.00 0.00 0.00 0.00 00:10:27.512 [2024-11-06T14:14:55.147Z] =================================================================================================================== 00:10:27.512 [2024-11-06T14:14:55.147Z] Total : 29184.00 114.00 0.00 0.00 0.00 0.00 0.00 00:10:27.512 00:10:28.080 15:14:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a713e3cf-497b-4ed2-b929-1c84d52d05b8 00:10:28.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:28.339 Nvme0n1 : 2.00 29392.00 114.81 0.00 0.00 0.00 0.00 0.00 00:10:28.339 [2024-11-06T14:14:55.974Z] =================================================================================================================== 00:10:28.339 [2024-11-06T14:14:55.974Z] Total : 29392.00 114.81 0.00 0.00 0.00 0.00 0.00 00:10:28.339 00:10:28.339 true 00:10:28.339 15:14:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a713e3cf-497b-4ed2-b929-1c84d52d05b8 00:10:28.339 15:14:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:28.598 15:14:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:28.598 15:14:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:28.598 15:14:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2994985 00:10:29.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:29.166 Nvme0n1 : 3.00 29450.67 115.04 0.00 0.00 0.00 0.00 0.00 00:10:29.166 [2024-11-06T14:14:56.801Z] =================================================================================================================== 00:10:29.166 [2024-11-06T14:14:56.801Z] Total : 29450.67 115.04 0.00 0.00 0.00 0.00 0.00 00:10:29.166 00:10:30.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:30.105 Nvme0n1 : 4.00 29496.50 115.22 0.00 0.00 0.00 0.00 0.00 00:10:30.105 [2024-11-06T14:14:57.740Z] 
=================================================================================================================== 00:10:30.105 [2024-11-06T14:14:57.740Z] Total : 29496.50 115.22 0.00 0.00 0.00 0.00 0.00 00:10:30.105 00:10:31.483 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:31.483 Nvme0n1 : 5.00 29568.40 115.50 0.00 0.00 0.00 0.00 0.00 00:10:31.483 [2024-11-06T14:14:59.118Z] =================================================================================================================== 00:10:31.483 [2024-11-06T14:14:59.118Z] Total : 29568.40 115.50 0.00 0.00 0.00 0.00 0.00 00:10:31.483 00:10:32.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:32.420 Nvme0n1 : 6.00 29610.50 115.67 0.00 0.00 0.00 0.00 0.00 00:10:32.420 [2024-11-06T14:15:00.055Z] =================================================================================================================== 00:10:32.420 [2024-11-06T14:15:00.055Z] Total : 29610.50 115.67 0.00 0.00 0.00 0.00 0.00 00:10:32.420 00:10:33.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:33.356 Nvme0n1 : 7.00 29572.43 115.52 0.00 0.00 0.00 0.00 0.00 00:10:33.356 [2024-11-06T14:15:00.991Z] =================================================================================================================== 00:10:33.356 [2024-11-06T14:15:00.991Z] Total : 29572.43 115.52 0.00 0.00 0.00 0.00 0.00 00:10:33.356 00:10:34.293 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:34.293 Nvme0n1 : 8.00 29580.12 115.55 0.00 0.00 0.00 0.00 0.00 00:10:34.293 [2024-11-06T14:15:01.928Z] =================================================================================================================== 00:10:34.293 [2024-11-06T14:15:01.928Z] Total : 29580.12 115.55 0.00 0.00 0.00 0.00 0.00 00:10:34.293 00:10:35.231 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.231 Nvme0n1 : 9.00 29614.33 115.68 0.00 0.00 0.00 0.00 0.00 00:10:35.231 [2024-11-06T14:15:02.866Z] =================================================================================================================== 00:10:35.231 [2024-11-06T14:15:02.866Z] Total : 29614.33 115.68 0.00 0.00 0.00 0.00 0.00 00:10:35.231 00:10:36.168 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.168 Nvme0n1 : 10.00 29654.00 115.84 0.00 0.00 0.00 0.00 0.00 00:10:36.168 [2024-11-06T14:15:03.803Z] =================================================================================================================== 00:10:36.168 [2024-11-06T14:15:03.803Z] Total : 29654.00 115.84 0.00 0.00 0.00 0.00 0.00 00:10:36.168 00:10:36.168 00:10:36.168 Latency(us) 00:10:36.168 [2024-11-06T14:15:03.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.168 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.169 Nvme0n1 : 10.00 29653.80 115.84 0.00 0.00 4313.08 2735.42 11796.48 00:10:36.169 [2024-11-06T14:15:03.804Z] =================================================================================================================== 00:10:36.169 [2024-11-06T14:15:03.804Z] Total : 29653.80 115.84 0.00 0.00 4313.08 2735.42 11796.48 00:10:36.169 { 00:10:36.169 "results": [ 00:10:36.169 { 00:10:36.169 "job": "Nvme0n1", 00:10:36.169 "core_mask": "0x2", 00:10:36.169 "workload": "randwrite", 00:10:36.169 "status": "finished", 00:10:36.169 "queue_depth": 128, 00:10:36.169 "io_size": 4096, 
00:10:36.169 "runtime": 10.003542, 00:10:36.169 "iops": 29653.79662523534, 00:10:36.169 "mibps": 115.83514306732555, 00:10:36.169 "io_failed": 0, 00:10:36.169 "io_timeout": 0, 00:10:36.169 "avg_latency_us": 4313.079392911022, 00:10:36.169 "min_latency_us": 2735.4156521739133, 00:10:36.169 "max_latency_us": 11796.48 00:10:36.169 } 00:10:36.169 ], 00:10:36.169 "core_count": 1 00:10:36.169 } 00:10:36.169 15:15:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2994799 00:10:36.169 15:15:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 2994799 ']' 00:10:36.169 15:15:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 2994799 00:10:36.169 15:15:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:10:36.169 15:15:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:36.169 15:15:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2994799 00:10:36.428 15:15:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:10:36.428 15:15:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:36.428 15:15:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2994799' 00:10:36.428 killing process with pid 2994799 00:10:36.428 15:15:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 2994799 00:10:36.428 Received shutdown signal, test time was about 10.000000 seconds 00:10:36.428 00:10:36.428 Latency(us) 00:10:36.428 [2024-11-06T14:15:04.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.428 [2024-11-06T14:15:04.063Z] =================================================================================================================== 00:10:36.428 [2024-11-06T14:15:04.063Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:36.428 15:15:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 2994799 00:10:37.365 15:15:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:37.365 15:15:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:37.624 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a713e3cf-497b-4ed2-b929-1c84d52d05b8 00:10:37.624 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:37.883 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:37.883 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:37.883 15:15:05 
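The cluster counts checked here follow from the sizes used during setup; a short sketch of the same bookkeeping, assuming the lvstore UUID from this run (the not-quite-round totals reflect lvstore metadata overhead):

    lvs_uuid=a713e3cf-497b-4ed2-b929-1c84d52d05b8

    total=$(rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters')
    free=$(rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters')

    # 200M backing file / 4M clusters gives 49 usable clusters before the grow,
    # 99 after growing to 400M; the 150M lvol occupies 38 of them, leaving 61 free.
    (( total == 99 )) || exit 1
    (( free == 61 ))  || exit 1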
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:38.142 [2024-11-06 15:15:05.555009] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:38.142 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a713e3cf-497b-4ed2-b929-1c84d52d05b8 00:10:38.142 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:10:38.142 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a713e3cf-497b-4ed2-b929-1c84d52d05b8 00:10:38.142 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:38.142 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:38.142 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:38.142 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:38.142 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:38.142 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:38.142 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:38.142 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:10:38.142 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a713e3cf-497b-4ed2-b929-1c84d52d05b8 00:10:38.402 request: 00:10:38.402 { 00:10:38.402 "uuid": "a713e3cf-497b-4ed2-b929-1c84d52d05b8", 00:10:38.402 "method": "bdev_lvol_get_lvstores", 00:10:38.402 "req_id": 1 00:10:38.402 } 00:10:38.402 Got JSON-RPC error response 00:10:38.402 response: 00:10:38.402 { 00:10:38.402 "code": -19, 00:10:38.402 "message": "No such device" 00:10:38.402 } 00:10:38.402 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:10:38.402 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:38.402 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:38.402 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:38.402 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:38.402 aio_bdev 00:10:38.402 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2884a858-a2b2-4246-8493-fb372c122720 00:10:38.402 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=2884a858-a2b2-4246-8493-fb372c122720 00:10:38.402 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:38.402 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:10:38.402 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:38.402 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:38.402 15:15:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:38.662 15:15:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2884a858-a2b2-4246-8493-fb372c122720 -t 2000 00:10:38.921 [ 00:10:38.921 { 00:10:38.921 "name": "2884a858-a2b2-4246-8493-fb372c122720", 00:10:38.921 "aliases": [ 00:10:38.921 "lvs/lvol" 00:10:38.921 ], 00:10:38.921 "product_name": "Logical Volume", 00:10:38.921 "block_size": 4096, 00:10:38.921 "num_blocks": 38912, 00:10:38.921 "uuid": "2884a858-a2b2-4246-8493-fb372c122720", 00:10:38.921 "assigned_rate_limits": { 00:10:38.921 "rw_ios_per_sec": 0, 00:10:38.921 "rw_mbytes_per_sec": 0, 00:10:38.921 "r_mbytes_per_sec": 0, 00:10:38.921 "w_mbytes_per_sec": 0 00:10:38.921 }, 00:10:38.921 "claimed": false, 00:10:38.921 "zoned": false, 00:10:38.921 "supported_io_types": { 00:10:38.921 "read": true, 00:10:38.921 "write": true, 00:10:38.921 "unmap": true, 00:10:38.921 "flush": false, 00:10:38.921 "reset": true, 00:10:38.921 "nvme_admin": false, 00:10:38.921 "nvme_io": false, 00:10:38.921 "nvme_io_md": false, 00:10:38.921 "write_zeroes": true, 00:10:38.921 "zcopy": false, 00:10:38.921 "get_zone_info": false, 00:10:38.921 "zone_management": false, 00:10:38.921 "zone_append": false, 00:10:38.921 "compare": false, 00:10:38.921 "compare_and_write": false, 00:10:38.921 "abort": false, 00:10:38.921 "seek_hole": true, 00:10:38.921 "seek_data": true, 00:10:38.921 "copy": false, 00:10:38.921 "nvme_iov_md": false 00:10:38.921 }, 00:10:38.921 "driver_specific": { 00:10:38.921 "lvol": { 00:10:38.921 "lvol_store_uuid": "a713e3cf-497b-4ed2-b929-1c84d52d05b8", 00:10:38.921 "base_bdev": "aio_bdev", 00:10:38.921 "thin_provision": false, 00:10:38.921 "num_allocated_clusters": 38, 00:10:38.921 "snapshot": false, 00:10:38.921 "clone": false, 00:10:38.921 "esnap_clone": false 00:10:38.921 } 00:10:38.921 } 00:10:38.921 } 00:10:38.921 ] 00:10:38.921 15:15:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:10:38.921 15:15:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a713e3cf-497b-4ed2-b929-1c84d52d05b8 00:10:38.921 15:15:06 
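Condensed, the hot-remove check traced here is the following (NOT in the trace is the harness helper that inverts an exit status; this sketch uses plain bash, and aio_file/lvs/lvol refer to the variables from the setup sketch above):

    # With the aio bdev deleted out from under it, the lvstore must be gone:
    # bdev_lvol_get_lvstores is expected to fail with -19 'No such device'.
    if rpc.py bdev_lvol_get_lvstores -u "$lvs" 2>/dev/null; then
        echo "lvstore unexpectedly still present" >&2
        exit 1
    fi

    # Re-creating the aio bdev on the same file lets lvol re-open the lvstore;
    # the lvol bdev then reappears and the free/total cluster counts can be
    # verified again (61 free / 99 total, as above).
    rpc.py bdev_aio_create "$aio_file" aio_bdev 4096
    rpc.py bdev_wait_for_examine
    rpc.py bdev_get_bdevs -b "$lvol" -t 2000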
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:39.180 15:15:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:39.180 15:15:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a713e3cf-497b-4ed2-b929-1c84d52d05b8 00:10:39.180 15:15:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:39.180 15:15:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:39.180 15:15:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2884a858-a2b2-4246-8493-fb372c122720 00:10:39.439 15:15:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a713e3cf-497b-4ed2-b929-1c84d52d05b8 00:10:39.698 15:15:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:39.957 15:15:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:39.957 00:10:39.957 real 0m17.128s 00:10:39.957 user 0m16.946s 00:10:39.957 sys 0m1.378s 00:10:39.957 15:15:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:39.957 15:15:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:39.957 ************************************ 00:10:39.957 END TEST lvs_grow_clean 00:10:39.957 ************************************ 00:10:39.957 15:15:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:39.957 15:15:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:39.957 15:15:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:39.957 15:15:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:39.957 ************************************ 00:10:39.957 START TEST lvs_grow_dirty 00:10:39.957 ************************************ 00:10:39.957 15:15:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:10:39.957 15:15:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:39.957 15:15:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:39.957 15:15:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:39.957 15:15:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:39.957 15:15:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # 
local aio_final_size_mb=400 00:10:39.957 15:15:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:39.957 15:15:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:39.957 15:15:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:39.957 15:15:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:40.217 15:15:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:40.217 15:15:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:40.476 15:15:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f779387d-d5a7-496b-8ece-067aee6b3c63 00:10:40.476 15:15:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f779387d-d5a7-496b-8ece-067aee6b3c63 00:10:40.476 15:15:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:40.735 15:15:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:40.735 15:15:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:40.735 15:15:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f779387d-d5a7-496b-8ece-067aee6b3c63 lvol 150 00:10:40.735 15:15:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=27a460ff-4051-4085-8c88-c06640f97c2a 00:10:40.735 15:15:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:40.735 15:15:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:40.994 [2024-11-06 15:15:08.495740] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:40.994 [2024-11-06 15:15:08.495820] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:40.994 true 00:10:40.994 15:15:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f779387d-d5a7-496b-8ece-067aee6b3c63 00:10:40.994 15:15:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:41.254 15:15:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:41.254 15:15:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:41.513 15:15:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 27a460ff-4051-4085-8c88-c06640f97c2a 00:10:41.513 15:15:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:41.772 [2024-11-06 15:15:09.282367] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:41.772 15:15:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:42.032 15:15:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:42.032 15:15:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2997691 00:10:42.032 15:15:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:42.032 15:15:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2997691 /var/tmp/bdevperf.sock 00:10:42.032 15:15:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2997691 ']' 00:10:42.032 15:15:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:42.032 15:15:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:42.032 15:15:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:42.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:42.032 15:15:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:42.032 15:15:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:42.032 [2024-11-06 15:15:09.573950] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
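From here on the I/O side is driven by bdevperf over its own RPC socket; a condensed sketch with paths shortened and only the flags visible in the trace:

    # Start bdevperf idle (-z makes it wait for an explicit trigger): 4 KiB random
    # writes, queue depth 128, 10 seconds, core mask 0x2.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

    # Attach the NVMe-oF/RDMA controller exported by the target.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

    # Kick off the workload; the per-second and final latency tables follow in the log.
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests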
00:10:42.032 [2024-11-06 15:15:09.574059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2997691 ] 00:10:42.291 [2024-11-06 15:15:09.707035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.291 [2024-11-06 15:15:09.821882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.859 15:15:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:42.859 15:15:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:10:42.859 15:15:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:43.117 Nvme0n1 00:10:43.117 15:15:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:43.376 [ 00:10:43.376 { 00:10:43.376 "name": "Nvme0n1", 00:10:43.376 "aliases": [ 00:10:43.376 "27a460ff-4051-4085-8c88-c06640f97c2a" 00:10:43.376 ], 00:10:43.376 "product_name": "NVMe disk", 00:10:43.376 "block_size": 4096, 00:10:43.376 "num_blocks": 38912, 00:10:43.376 "uuid": "27a460ff-4051-4085-8c88-c06640f97c2a", 00:10:43.376 "numa_id": 0, 00:10:43.376 "assigned_rate_limits": { 00:10:43.376 "rw_ios_per_sec": 0, 00:10:43.376 "rw_mbytes_per_sec": 0, 00:10:43.376 "r_mbytes_per_sec": 0, 00:10:43.376 "w_mbytes_per_sec": 0 00:10:43.376 }, 00:10:43.376 "claimed": false, 00:10:43.376 "zoned": false, 00:10:43.376 "supported_io_types": { 00:10:43.376 "read": true, 00:10:43.376 "write": true, 00:10:43.376 "unmap": true, 00:10:43.376 "flush": true, 00:10:43.376 "reset": true, 00:10:43.376 "nvme_admin": true, 00:10:43.376 "nvme_io": true, 00:10:43.376 "nvme_io_md": false, 00:10:43.376 "write_zeroes": true, 00:10:43.376 "zcopy": false, 00:10:43.376 "get_zone_info": false, 00:10:43.376 "zone_management": false, 00:10:43.376 "zone_append": false, 00:10:43.376 "compare": true, 00:10:43.376 "compare_and_write": true, 00:10:43.376 "abort": true, 00:10:43.376 "seek_hole": false, 00:10:43.376 "seek_data": false, 00:10:43.376 "copy": true, 00:10:43.376 "nvme_iov_md": false 00:10:43.376 }, 00:10:43.376 "memory_domains": [ 00:10:43.376 { 00:10:43.376 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:10:43.376 "dma_device_type": 0 00:10:43.376 } 00:10:43.376 ], 00:10:43.376 "driver_specific": { 00:10:43.376 "nvme": [ 00:10:43.376 { 00:10:43.376 "trid": { 00:10:43.376 "trtype": "RDMA", 00:10:43.376 "adrfam": "IPv4", 00:10:43.376 "traddr": "192.168.100.8", 00:10:43.376 "trsvcid": "4420", 00:10:43.376 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:43.376 }, 00:10:43.376 "ctrlr_data": { 00:10:43.376 "cntlid": 1, 00:10:43.376 "vendor_id": "0x8086", 00:10:43.376 "model_number": "SPDK bdev Controller", 00:10:43.376 "serial_number": "SPDK0", 00:10:43.376 "firmware_revision": "25.01", 00:10:43.376 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:43.376 "oacs": { 00:10:43.376 "security": 0, 00:10:43.376 "format": 0, 00:10:43.376 "firmware": 0, 00:10:43.376 "ns_manage": 0 00:10:43.376 }, 00:10:43.376 "multi_ctrlr": true, 
00:10:43.376 "ana_reporting": false 00:10:43.376 }, 00:10:43.376 "vs": { 00:10:43.376 "nvme_version": "1.3" 00:10:43.376 }, 00:10:43.376 "ns_data": { 00:10:43.377 "id": 1, 00:10:43.377 "can_share": true 00:10:43.377 } 00:10:43.377 } 00:10:43.377 ], 00:10:43.377 "mp_policy": "active_passive" 00:10:43.377 } 00:10:43.377 } 00:10:43.377 ] 00:10:43.377 15:15:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2997890 00:10:43.377 15:15:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:43.377 15:15:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:43.377 Running I/O for 10 seconds... 00:10:44.752 Latency(us) 00:10:44.752 [2024-11-06T14:15:12.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:44.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:44.752 Nvme0n1 : 1.00 29088.00 113.62 0.00 0.00 0.00 0.00 0.00 00:10:44.752 [2024-11-06T14:15:12.387Z] =================================================================================================================== 00:10:44.752 [2024-11-06T14:15:12.387Z] Total : 29088.00 113.62 0.00 0.00 0.00 0.00 0.00 00:10:44.752 00:10:45.321 15:15:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f779387d-d5a7-496b-8ece-067aee6b3c63 00:10:45.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:45.580 Nvme0n1 : 2.00 29394.50 114.82 0.00 0.00 0.00 0.00 0.00 00:10:45.580 [2024-11-06T14:15:13.215Z] =================================================================================================================== 00:10:45.580 [2024-11-06T14:15:13.215Z] Total : 29394.50 114.82 0.00 0.00 0.00 0.00 0.00 00:10:45.580 00:10:45.580 true 00:10:45.580 15:15:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f779387d-d5a7-496b-8ece-067aee6b3c63 00:10:45.580 15:15:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:45.839 15:15:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:45.839 15:15:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:45.839 15:15:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2997890 00:10:46.407 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:46.407 Nvme0n1 : 3.00 29493.67 115.21 0.00 0.00 0.00 0.00 0.00 00:10:46.407 [2024-11-06T14:15:14.042Z] =================================================================================================================== 00:10:46.407 [2024-11-06T14:15:14.042Z] Total : 29493.67 115.21 0.00 0.00 0.00 0.00 0.00 00:10:46.407 00:10:47.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:47.787 Nvme0n1 : 4.00 29584.75 115.57 0.00 0.00 0.00 0.00 0.00 00:10:47.787 [2024-11-06T14:15:15.422Z] 
=================================================================================================================== 00:10:47.787 [2024-11-06T14:15:15.422Z] Total : 29584.75 115.57 0.00 0.00 0.00 0.00 0.00 00:10:47.787 00:10:48.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:48.725 Nvme0n1 : 5.00 29658.20 115.85 0.00 0.00 0.00 0.00 0.00 00:10:48.725 [2024-11-06T14:15:16.360Z] =================================================================================================================== 00:10:48.725 [2024-11-06T14:15:16.360Z] Total : 29658.20 115.85 0.00 0.00 0.00 0.00 0.00 00:10:48.725 00:10:49.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:49.660 Nvme0n1 : 6.00 29711.83 116.06 0.00 0.00 0.00 0.00 0.00 00:10:49.660 [2024-11-06T14:15:17.295Z] =================================================================================================================== 00:10:49.660 [2024-11-06T14:15:17.295Z] Total : 29711.83 116.06 0.00 0.00 0.00 0.00 0.00 00:10:49.660 00:10:50.599 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:50.599 Nvme0n1 : 7.00 29755.29 116.23 0.00 0.00 0.00 0.00 0.00 00:10:50.599 [2024-11-06T14:15:18.234Z] =================================================================================================================== 00:10:50.599 [2024-11-06T14:15:18.234Z] Total : 29755.29 116.23 0.00 0.00 0.00 0.00 0.00 00:10:50.599 00:10:51.538 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:51.538 Nvme0n1 : 8.00 29792.25 116.38 0.00 0.00 0.00 0.00 0.00 00:10:51.538 [2024-11-06T14:15:19.173Z] =================================================================================================================== 00:10:51.538 [2024-11-06T14:15:19.173Z] Total : 29792.25 116.38 0.00 0.00 0.00 0.00 0.00 00:10:51.538 00:10:52.476 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:52.476 Nvme0n1 : 9.00 29816.78 116.47 0.00 0.00 0.00 0.00 0.00 00:10:52.476 [2024-11-06T14:15:20.111Z] =================================================================================================================== 00:10:52.476 [2024-11-06T14:15:20.111Z] Total : 29816.78 116.47 0.00 0.00 0.00 0.00 0.00 00:10:52.476 00:10:53.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:53.413 Nvme0n1 : 10.00 29741.00 116.18 0.00 0.00 0.00 0.00 0.00 00:10:53.413 [2024-11-06T14:15:21.048Z] =================================================================================================================== 00:10:53.413 [2024-11-06T14:15:21.048Z] Total : 29741.00 116.18 0.00 0.00 0.00 0.00 0.00 00:10:53.413 00:10:53.413 00:10:53.413 Latency(us) 00:10:53.413 [2024-11-06T14:15:21.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:53.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:53.413 Nvme0n1 : 10.00 29742.00 116.18 0.00 0.00 4300.38 3191.32 17438.27 00:10:53.413 [2024-11-06T14:15:21.048Z] =================================================================================================================== 00:10:53.413 [2024-11-06T14:15:21.048Z] Total : 29742.00 116.18 0.00 0.00 4300.38 3191.32 17438.27 00:10:53.413 { 00:10:53.413 "results": [ 00:10:53.413 { 00:10:53.413 "job": "Nvme0n1", 00:10:53.413 "core_mask": "0x2", 00:10:53.413 "workload": "randwrite", 00:10:53.413 "status": "finished", 00:10:53.413 "queue_depth": 128, 00:10:53.413 "io_size": 4096, 
00:10:53.413 "runtime": 10.003967, 00:10:53.413 "iops": 29742.001348065223, 00:10:53.413 "mibps": 116.17969276587978, 00:10:53.413 "io_failed": 0, 00:10:53.413 "io_timeout": 0, 00:10:53.413 "avg_latency_us": 4300.379289631109, 00:10:53.413 "min_latency_us": 3191.318260869565, 00:10:53.413 "max_latency_us": 17438.274782608696 00:10:53.413 } 00:10:53.413 ], 00:10:53.413 "core_count": 1 00:10:53.413 } 00:10:53.413 15:15:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2997691 00:10:53.413 15:15:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 2997691 ']' 00:10:53.413 15:15:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 2997691 00:10:53.413 15:15:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:10:53.413 15:15:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:53.673 15:15:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2997691 00:10:53.673 15:15:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:10:53.673 15:15:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:53.673 15:15:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2997691' 00:10:53.673 killing process with pid 2997691 00:10:53.673 15:15:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 2997691 00:10:53.673 Received shutdown signal, test time was about 10.000000 seconds 00:10:53.673 00:10:53.673 Latency(us) 00:10:53.673 [2024-11-06T14:15:21.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:53.673 [2024-11-06T14:15:21.308Z] =================================================================================================================== 00:10:53.673 [2024-11-06T14:15:21.308Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:53.673 15:15:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 2997691 00:10:54.611 15:15:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:54.611 15:15:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:54.870 15:15:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:54.870 15:15:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f779387d-d5a7-496b-8ece-067aee6b3c63 00:10:55.129 15:15:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:55.129 15:15:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:55.129 15:15:22 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2994375 00:10:55.129 15:15:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2994375 00:10:55.129 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2994375 Killed "${NVMF_APP[@]}" "$@" 00:10:55.130 15:15:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:55.130 15:15:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:55.130 15:15:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:55.130 15:15:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:55.130 15:15:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:55.130 15:15:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2999378 00:10:55.130 15:15:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:55.130 15:15:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2999378 00:10:55.130 15:15:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2999378 ']' 00:10:55.130 15:15:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.130 15:15:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:55.130 15:15:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.130 15:15:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:55.130 15:15:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:55.130 [2024-11-06 15:15:22.757989] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:10:55.130 [2024-11-06 15:15:22.758090] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.389 [2024-11-06 15:15:22.896010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.389 [2024-11-06 15:15:22.998818] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.389 [2024-11-06 15:15:22.998878] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.389 [2024-11-06 15:15:22.998909] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.389 [2024-11-06 15:15:22.998924] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:55.389 [2024-11-06 15:15:22.998935] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:55.389 [2024-11-06 15:15:23.000331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.958 15:15:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:55.958 15:15:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:10:55.958 15:15:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:55.958 15:15:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:55.958 15:15:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:55.958 15:15:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.218 15:15:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:56.218 [2024-11-06 15:15:23.768988] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:56.218 [2024-11-06 15:15:23.769166] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:56.218 [2024-11-06 15:15:23.769207] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:56.218 15:15:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:56.218 15:15:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 27a460ff-4051-4085-8c88-c06640f97c2a 00:10:56.218 15:15:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=27a460ff-4051-4085-8c88-c06640f97c2a 00:10:56.218 15:15:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:56.218 15:15:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:10:56.218 15:15:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:56.218 15:15:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:56.218 15:15:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:56.479 15:15:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 27a460ff-4051-4085-8c88-c06640f97c2a -t 2000 00:10:56.738 [ 00:10:56.738 { 00:10:56.738 "name": "27a460ff-4051-4085-8c88-c06640f97c2a", 00:10:56.738 "aliases": [ 00:10:56.738 "lvs/lvol" 00:10:56.738 ], 00:10:56.739 "product_name": "Logical Volume", 00:10:56.739 "block_size": 4096, 00:10:56.739 "num_blocks": 38912, 00:10:56.739 "uuid": "27a460ff-4051-4085-8c88-c06640f97c2a", 00:10:56.739 "assigned_rate_limits": { 00:10:56.739 "rw_ios_per_sec": 0, 00:10:56.739 "rw_mbytes_per_sec": 0, 
00:10:56.739 "r_mbytes_per_sec": 0, 00:10:56.739 "w_mbytes_per_sec": 0 00:10:56.739 }, 00:10:56.739 "claimed": false, 00:10:56.739 "zoned": false, 00:10:56.739 "supported_io_types": { 00:10:56.739 "read": true, 00:10:56.739 "write": true, 00:10:56.739 "unmap": true, 00:10:56.739 "flush": false, 00:10:56.739 "reset": true, 00:10:56.739 "nvme_admin": false, 00:10:56.739 "nvme_io": false, 00:10:56.739 "nvme_io_md": false, 00:10:56.739 "write_zeroes": true, 00:10:56.739 "zcopy": false, 00:10:56.739 "get_zone_info": false, 00:10:56.739 "zone_management": false, 00:10:56.739 "zone_append": false, 00:10:56.739 "compare": false, 00:10:56.739 "compare_and_write": false, 00:10:56.739 "abort": false, 00:10:56.739 "seek_hole": true, 00:10:56.739 "seek_data": true, 00:10:56.739 "copy": false, 00:10:56.739 "nvme_iov_md": false 00:10:56.739 }, 00:10:56.739 "driver_specific": { 00:10:56.739 "lvol": { 00:10:56.739 "lvol_store_uuid": "f779387d-d5a7-496b-8ece-067aee6b3c63", 00:10:56.739 "base_bdev": "aio_bdev", 00:10:56.739 "thin_provision": false, 00:10:56.739 "num_allocated_clusters": 38, 00:10:56.739 "snapshot": false, 00:10:56.739 "clone": false, 00:10:56.739 "esnap_clone": false 00:10:56.739 } 00:10:56.739 } 00:10:56.739 } 00:10:56.739 ] 00:10:56.739 15:15:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:10:56.739 15:15:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f779387d-d5a7-496b-8ece-067aee6b3c63 00:10:56.739 15:15:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:56.739 15:15:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:56.739 15:15:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f779387d-d5a7-496b-8ece-067aee6b3c63 00:10:56.739 15:15:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:56.997 15:15:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:56.997 15:15:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:57.256 [2024-11-06 15:15:24.765430] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:57.256 15:15:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f779387d-d5a7-496b-8ece-067aee6b3c63 00:10:57.256 15:15:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:10:57.256 15:15:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f779387d-d5a7-496b-8ece-067aee6b3c63 00:10:57.256 15:15:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:57.256 15:15:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:57.256 15:15:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:57.256 15:15:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:57.256 15:15:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:57.256 15:15:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:57.256 15:15:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:57.256 15:15:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:10:57.257 15:15:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f779387d-d5a7-496b-8ece-067aee6b3c63 00:10:57.515 request: 00:10:57.515 { 00:10:57.515 "uuid": "f779387d-d5a7-496b-8ece-067aee6b3c63", 00:10:57.515 "method": "bdev_lvol_get_lvstores", 00:10:57.515 "req_id": 1 00:10:57.515 } 00:10:57.515 Got JSON-RPC error response 00:10:57.515 response: 00:10:57.515 { 00:10:57.515 "code": -19, 00:10:57.515 "message": "No such device" 00:10:57.515 } 00:10:57.515 15:15:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:10:57.515 15:15:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:57.515 15:15:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:57.515 15:15:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:57.515 15:15:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:57.774 aio_bdev 00:10:57.774 15:15:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 27a460ff-4051-4085-8c88-c06640f97c2a 00:10:57.774 15:15:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=27a460ff-4051-4085-8c88-c06640f97c2a 00:10:57.774 15:15:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:57.774 15:15:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:10:57.774 15:15:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:57.774 15:15:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:57.774 15:15:25 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:58.033 15:15:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 27a460ff-4051-4085-8c88-c06640f97c2a -t 2000 00:10:58.033 [ 00:10:58.033 { 00:10:58.033 "name": "27a460ff-4051-4085-8c88-c06640f97c2a", 00:10:58.033 "aliases": [ 00:10:58.033 "lvs/lvol" 00:10:58.033 ], 00:10:58.033 "product_name": "Logical Volume", 00:10:58.033 "block_size": 4096, 00:10:58.033 "num_blocks": 38912, 00:10:58.033 "uuid": "27a460ff-4051-4085-8c88-c06640f97c2a", 00:10:58.033 "assigned_rate_limits": { 00:10:58.033 "rw_ios_per_sec": 0, 00:10:58.033 "rw_mbytes_per_sec": 0, 00:10:58.033 "r_mbytes_per_sec": 0, 00:10:58.033 "w_mbytes_per_sec": 0 00:10:58.033 }, 00:10:58.033 "claimed": false, 00:10:58.033 "zoned": false, 00:10:58.033 "supported_io_types": { 00:10:58.033 "read": true, 00:10:58.033 "write": true, 00:10:58.033 "unmap": true, 00:10:58.033 "flush": false, 00:10:58.033 "reset": true, 00:10:58.033 "nvme_admin": false, 00:10:58.033 "nvme_io": false, 00:10:58.033 "nvme_io_md": false, 00:10:58.033 "write_zeroes": true, 00:10:58.033 "zcopy": false, 00:10:58.033 "get_zone_info": false, 00:10:58.033 "zone_management": false, 00:10:58.033 "zone_append": false, 00:10:58.033 "compare": false, 00:10:58.033 "compare_and_write": false, 00:10:58.033 "abort": false, 00:10:58.033 "seek_hole": true, 00:10:58.033 "seek_data": true, 00:10:58.033 "copy": false, 00:10:58.033 "nvme_iov_md": false 00:10:58.033 }, 00:10:58.033 "driver_specific": { 00:10:58.033 "lvol": { 00:10:58.033 "lvol_store_uuid": "f779387d-d5a7-496b-8ece-067aee6b3c63", 00:10:58.033 "base_bdev": "aio_bdev", 00:10:58.033 "thin_provision": false, 00:10:58.033 "num_allocated_clusters": 38, 00:10:58.033 "snapshot": false, 00:10:58.033 "clone": false, 00:10:58.033 "esnap_clone": false 00:10:58.033 } 00:10:58.033 } 00:10:58.033 } 00:10:58.033 ] 00:10:58.033 15:15:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:10:58.033 15:15:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f779387d-d5a7-496b-8ece-067aee6b3c63 00:10:58.033 15:15:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:58.292 15:15:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:58.292 15:15:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f779387d-d5a7-496b-8ece-067aee6b3c63 00:10:58.292 15:15:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:58.551 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:58.551 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 27a460ff-4051-4085-8c88-c06640f97c2a 00:10:58.810 15:15:26 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f779387d-d5a7-496b-8ece-067aee6b3c63 00:10:58.810 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:59.069 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:59.069 00:10:59.069 real 0m19.164s 00:10:59.069 user 0m49.495s 00:10:59.069 sys 0m3.736s 00:10:59.069 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:59.069 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:59.069 ************************************ 00:10:59.069 END TEST lvs_grow_dirty 00:10:59.069 ************************************ 00:10:59.069 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:59.069 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:10:59.069 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:10:59.069 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:10:59.069 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:59.328 nvmf_trace.0 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:59.328 rmmod nvme_rdma 00:10:59.328 rmmod nvme_fabrics 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:59.328 
15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2999378 ']' 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2999378 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 2999378 ']' 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 2999378 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2999378 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2999378' 00:10:59.328 killing process with pid 2999378 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 2999378 00:10:59.328 15:15:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 2999378 00:11:00.706 15:15:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:00.706 15:15:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:00.706 00:11:00.706 real 0m46.116s 00:11:00.706 user 1m13.973s 00:11:00.706 sys 0m11.126s 00:11:00.706 15:15:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:00.706 15:15:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:00.706 ************************************ 00:11:00.706 END TEST nvmf_lvs_grow 00:11:00.706 ************************************ 00:11:00.706 15:15:27 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:11:00.706 15:15:27 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:00.706 15:15:27 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:00.706 15:15:27 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:00.706 ************************************ 00:11:00.706 START TEST nvmf_bdev_io_wait 00:11:00.706 ************************************ 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:11:00.706 * Looking for test storage... 
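Note: condensed for reference, the dirty-recovery portion of the lvs_grow_dirty run above boils down to the RPC sequence sketched below. This is a minimal illustration, not the test script itself: the rpc and aio_file shell variables are introduced here only for brevity, the lvstore UUID f779387d-d5a7-496b-8ece-067aee6b3c63, the lvol UUID 27a460ff-4051-4085-8c88-c06640f97c2a and all paths are the values from this particular run, and it assumes a freshly restarted nvmf_tgt after the previous one was killed with SIGKILL (the "dirty" shutdown seen in the log).

#!/usr/bin/env bash
# Minimal sketch (assumptions above) of the dirty-recovery RPC flow exercised
# by lvs_grow_dirty; values are taken from this run and are illustrative only.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
aio_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev

# Re-create the backing AIO bdev in the new target; the blobstore on it is
# recovered and the lvol reappears without being re-created.
$rpc bdev_aio_create "$aio_file" aio_bdev 4096
$rpc bdev_get_bdevs -b 27a460ff-4051-4085-8c88-c06640f97c2a -t 2000

# The lvstore that was grown while I/O was running survives the dirty
# shutdown: the test checks total and free cluster counts after recovery.
$rpc bdev_lvol_get_lvstores -u f779387d-d5a7-496b-8ece-067aee6b3c63 | jq -r '.[0].total_data_clusters'
$rpc bdev_lvol_get_lvstores -u f779387d-d5a7-496b-8ece-067aee6b3c63 | jq -r '.[0].free_clusters'

# Teardown, as in the log above.
$rpc bdev_lvol_delete 27a460ff-4051-4085-8c88-c06640f97c2a
$rpc bdev_lvol_delete_lvstore -u f779387d-d5a7-496b-8ece-067aee6b3c63
$rpc bdev_aio_delete aio_bdev

The post-recovery state the log verifies is 99 total data clusters and 61 free clusters, matching the (( data_clusters == 99 )) and (( free_clusters == 61 )) checks above.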
00:11:00.706 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:00.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.706 --rc genhtml_branch_coverage=1 00:11:00.706 --rc genhtml_function_coverage=1 00:11:00.706 --rc genhtml_legend=1 00:11:00.706 --rc geninfo_all_blocks=1 00:11:00.706 --rc geninfo_unexecuted_blocks=1 00:11:00.706 00:11:00.706 ' 00:11:00.706 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:00.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.706 --rc genhtml_branch_coverage=1 00:11:00.706 --rc genhtml_function_coverage=1 00:11:00.706 --rc genhtml_legend=1 00:11:00.706 --rc geninfo_all_blocks=1 00:11:00.706 --rc geninfo_unexecuted_blocks=1 00:11:00.706 00:11:00.706 ' 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:00.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.707 --rc genhtml_branch_coverage=1 00:11:00.707 --rc genhtml_function_coverage=1 00:11:00.707 --rc genhtml_legend=1 00:11:00.707 --rc geninfo_all_blocks=1 00:11:00.707 --rc geninfo_unexecuted_blocks=1 00:11:00.707 00:11:00.707 ' 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:00.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.707 --rc genhtml_branch_coverage=1 00:11:00.707 --rc genhtml_function_coverage=1 00:11:00.707 --rc genhtml_legend=1 00:11:00.707 --rc geninfo_all_blocks=1 00:11:00.707 --rc geninfo_unexecuted_blocks=1 00:11:00.707 00:11:00.707 ' 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.707 15:15:28 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.707 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:11:00.707 15:15:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.832 15:15:34 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:08.832 15:15:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:08.832 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:08.832 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:08.832 15:15:35 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:08.832 Found net devices under 0000:18:00.0: mlx_0_0 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:08.832 Found net devices under 0000:18:00.1: mlx_0_1 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # rdma_device_init 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # uname 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:08.832 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
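
load_ib_rdma_modules, traced here, is a fixed modprobe sequence guarded by a uname check; allocate_nic_ips then walks the RDMA-capable interfaces reported by rxe_cfg. A condensed sketch of the module-loading half, using exactly the module names shown above (root privileges assumed):

  #!/usr/bin/env bash
  # Sketch: load the IB/RDMA kernel stack needed for an NVMe-oF RDMA target,
  # in the order the traced load_ib_rdma_modules uses.
  [[ $(uname) == Linux ]] || exit 0        # the helper is a no-op off Linux
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$mod"
  done
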
nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:08.833 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:08.833 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:11:08.833 altname enp24s0f0np0 00:11:08.833 altname ens785f0np0 00:11:08.833 inet 192.168.100.8/24 scope global mlx_0_0 00:11:08.833 valid_lft forever preferred_lft forever 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:08.833 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:08.833 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:11:08.833 altname enp24s0f1np1 00:11:08.833 altname ens785f1np1 00:11:08.833 inet 192.168.100.9/24 scope global mlx_0_1 00:11:08.833 valid_lft forever preferred_lft forever 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t 
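
The per-interface address lookup seen in this stretch is a one-liner: take the IPv4 address that ip -o -4 addr show reports and strip the prefix length. As a small function matching the awk/cut pipeline in the trace:

  # Sketch of the traced get_ip_address helper.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  # On this rig: get_ip_address mlx_0_0 -> 192.168.100.8
  #              get_ip_address mlx_0_1 -> 192.168.100.9
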
rxe_net_devs 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:08.833 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:08.834 192.168.100.9' 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:08.834 192.168.100.9' 00:11:08.834 
15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # head -n 1 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:08.834 192.168.100.9' 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # tail -n +2 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # head -n 1 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3003080 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3003080 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 3003080 ']' 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:08.834 15:15:35 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:08.834 [2024-11-06 15:15:35.364067] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
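
nvmfappstart launches the target with --wait-for-rpc, so nvmf_tgt comes up with its framework paused until a later RPC starts it, and waitforlisten then blocks until the /var/tmp/spdk.sock RPC socket appears. A simplified sketch of those two steps (the real waitforlisten also enforces a retry budget and richer liveness checks):

  # Sketch: start nvmf_tgt paused at the RPC layer and wait for its socket.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!

  rpc_sock=/var/tmp/spdk.sock
  for _ in $(seq 1 100); do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      [[ -S $rpc_sock ]] && break
      sleep 0.1
  done
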
00:11:08.834 [2024-11-06 15:15:35.364209] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.834 [2024-11-06 15:15:35.516209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.834 [2024-11-06 15:15:35.631196] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.834 [2024-11-06 15:15:35.631242] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.834 [2024-11-06 15:15:35.631255] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.834 [2024-11-06 15:15:35.631269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.834 [2024-11-06 15:15:35.631280] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.834 [2024-11-06 15:15:35.633488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.834 [2024-11-06 15:15:35.633578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.834 [2024-11-06 15:15:35.633643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.834 [2024-11-06 15:15:35.633676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.834 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:08.834 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:11:08.834 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:08.834 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:08.834 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:08.834 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.834 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:08.834 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.834 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:08.834 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.834 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:08.834 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.834 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:08.834 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.834 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:08.834 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.834 15:15:36 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:09.094 [2024-11-06 15:15:36.473907] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7f6824776940) succeed. 00:11:09.094 [2024-11-06 15:15:36.483136] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7f6824732940) succeed. 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:09.354 Malloc0 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:09.354 [2024-11-06 15:15:36.870505] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3003287 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3003289 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local 
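
rpc_cmd in the trace is a thin wrapper that sends each call to /var/tmp/spdk.sock through the SPDK rpc.py script. The whole target-side configuration for this test, replayed as direct rpc.py invocations with the same arguments the log shows:

  # Sketch: the traced RPC sequence, spelled out (rpc.py path assumed from the spdk tree).
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

  $rpc bdev_set_options -p 5 -c 1            # tiny bdev_io pool/cache, so the io_wait paths get exercised
  $rpc framework_start_init                   # leave the --wait-for-rpc pause
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0   # 64 MiB backing bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
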
subsystem config 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:09.354 { 00:11:09.354 "params": { 00:11:09.354 "name": "Nvme$subsystem", 00:11:09.354 "trtype": "$TEST_TRANSPORT", 00:11:09.354 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:09.354 "adrfam": "ipv4", 00:11:09.354 "trsvcid": "$NVMF_PORT", 00:11:09.354 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:09.354 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:09.354 "hdgst": ${hdgst:-false}, 00:11:09.354 "ddgst": ${ddgst:-false} 00:11:09.354 }, 00:11:09.354 "method": "bdev_nvme_attach_controller" 00:11:09.354 } 00:11:09.354 EOF 00:11:09.354 )") 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3003291 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:09.354 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:09.354 { 00:11:09.354 "params": { 00:11:09.354 "name": "Nvme$subsystem", 00:11:09.354 "trtype": "$TEST_TRANSPORT", 00:11:09.354 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:09.354 "adrfam": "ipv4", 00:11:09.354 "trsvcid": "$NVMF_PORT", 00:11:09.354 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:09.354 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:09.354 "hdgst": ${hdgst:-false}, 00:11:09.354 "ddgst": ${ddgst:-false} 00:11:09.354 }, 00:11:09.354 "method": "bdev_nvme_attach_controller" 00:11:09.354 } 00:11:09.355 EOF 00:11:09.355 )") 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:09.355 { 00:11:09.355 "params": { 00:11:09.355 "name": "Nvme$subsystem", 00:11:09.355 "trtype": "$TEST_TRANSPORT", 00:11:09.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:09.355 "adrfam": "ipv4", 00:11:09.355 "trsvcid": "$NVMF_PORT", 00:11:09.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:09.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:09.355 "hdgst": ${hdgst:-false}, 00:11:09.355 "ddgst": 
${ddgst:-false} 00:11:09.355 }, 00:11:09.355 "method": "bdev_nvme_attach_controller" 00:11:09.355 } 00:11:09.355 EOF 00:11:09.355 )") 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3003294 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:09.355 { 00:11:09.355 "params": { 00:11:09.355 "name": "Nvme$subsystem", 00:11:09.355 "trtype": "$TEST_TRANSPORT", 00:11:09.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:09.355 "adrfam": "ipv4", 00:11:09.355 "trsvcid": "$NVMF_PORT", 00:11:09.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:09.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:09.355 "hdgst": ${hdgst:-false}, 00:11:09.355 "ddgst": ${ddgst:-false} 00:11:09.355 }, 00:11:09.355 "method": "bdev_nvme_attach_controller" 00:11:09.355 } 00:11:09.355 EOF 00:11:09.355 )") 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3003287 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:09.355 "params": { 00:11:09.355 "name": "Nvme1", 00:11:09.355 "trtype": "rdma", 00:11:09.355 "traddr": "192.168.100.8", 00:11:09.355 "adrfam": "ipv4", 00:11:09.355 "trsvcid": "4420", 00:11:09.355 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:09.355 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:09.355 "hdgst": false, 00:11:09.355 "ddgst": false 00:11:09.355 }, 00:11:09.355 "method": "bdev_nvme_attach_controller" 00:11:09.355 }' 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:09.355 "params": { 00:11:09.355 "name": "Nvme1", 00:11:09.355 "trtype": "rdma", 00:11:09.355 "traddr": "192.168.100.8", 00:11:09.355 "adrfam": "ipv4", 00:11:09.355 "trsvcid": "4420", 00:11:09.355 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:09.355 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:09.355 "hdgst": false, 00:11:09.355 "ddgst": false 00:11:09.355 }, 00:11:09.355 "method": "bdev_nvme_attach_controller" 00:11:09.355 }' 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:09.355 "params": { 00:11:09.355 "name": "Nvme1", 00:11:09.355 "trtype": "rdma", 00:11:09.355 "traddr": "192.168.100.8", 00:11:09.355 "adrfam": "ipv4", 00:11:09.355 "trsvcid": "4420", 00:11:09.355 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:09.355 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:09.355 "hdgst": false, 00:11:09.355 "ddgst": false 00:11:09.355 }, 00:11:09.355 "method": "bdev_nvme_attach_controller" 00:11:09.355 }' 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:09.355 15:15:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:09.355 "params": { 00:11:09.355 "name": "Nvme1", 00:11:09.355 "trtype": "rdma", 00:11:09.355 "traddr": "192.168.100.8", 00:11:09.355 "adrfam": "ipv4", 00:11:09.355 "trsvcid": "4420", 00:11:09.355 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:09.355 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:09.355 "hdgst": false, 00:11:09.355 "ddgst": false 00:11:09.355 }, 00:11:09.355 "method": "bdev_nvme_attach_controller" 00:11:09.355 }' 00:11:09.355 [2024-11-06 15:15:36.964609] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:11:09.355 [2024-11-06 15:15:36.964615] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:11:09.355 [2024-11-06 15:15:36.964612] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:11:09.355 [2024-11-06 15:15:36.964715] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-06 15:15:36.964715] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-06 15:15:36.964716] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:09.355 --proc-type=auto ] 00:11:09.355 --proc-type=auto ] 00:11:09.355 [2024-11-06 15:15:36.966551] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
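
Each of the four bdevperf instances launched above (write, read, flush, unmap on core masks 0x10/0x20/0x40/0x80) receives its bdev configuration through --json /dev/fd/63; the printf output in the trace is the inner params block of that configuration. Wrapped in the usual SPDK JSON-config skeleton (the wrapper is assumed here, only the params appear in the log), the file one instance consumes looks roughly like the sketch below, followed by the write run as launched in the trace.

  bdevperf_nvme.json (hypothetical file name; the harness pipes it in via /dev/fd/63 instead):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "rdma",
              "traddr": "192.168.100.8",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }

  # The 128-deep, 4 KiB write job on core mask 0x10, exactly as in the trace:
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 \
      --json bdevperf_nvme.json -q 128 -o 4096 -w write -t 1 -s 256
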
00:11:09.355 [2024-11-06 15:15:36.966648] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:09.614 [2024-11-06 15:15:37.228822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.874 [2024-11-06 15:15:37.338480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.874 [2024-11-06 15:15:37.338928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:09.874 [2024-11-06 15:15:37.442081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:09.874 [2024-11-06 15:15:37.451053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.874 [2024-11-06 15:15:37.500530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.133 [2024-11-06 15:15:37.567421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:10.133 [2024-11-06 15:15:37.604279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:11:10.133 Running I/O for 1 seconds... 00:11:10.392 Running I/O for 1 seconds... 00:11:10.392 Running I/O for 1 seconds... 00:11:10.392 Running I/O for 1 seconds... 00:11:11.531 16192.00 IOPS, 63.25 MiB/s 00:11:11.531 Latency(us) 00:11:11.531 [2024-11-06T14:15:39.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:11.531 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:11.531 Nvme1n1 : 1.01 16221.38 63.36 0.00 0.00 7863.50 5328.36 21541.40 00:11:11.531 [2024-11-06T14:15:39.166Z] =================================================================================================================== 00:11:11.531 [2024-11-06T14:15:39.166Z] Total : 16221.38 63.36 0.00 0.00 7863.50 5328.36 21541.40 00:11:11.531 226008.00 IOPS, 882.84 MiB/s 00:11:11.531 Latency(us) 00:11:11.531 [2024-11-06T14:15:39.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:11.531 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:11.531 Nvme1n1 : 1.00 225638.06 881.40 0.00 0.00 564.43 256.45 2635.69 00:11:11.531 [2024-11-06T14:15:39.166Z] =================================================================================================================== 00:11:11.531 [2024-11-06T14:15:39.166Z] Total : 225638.06 881.40 0.00 0.00 564.43 256.45 2635.69 00:11:11.531 13673.00 IOPS, 53.41 MiB/s 00:11:11.531 Latency(us) 00:11:11.531 [2024-11-06T14:15:39.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:11.531 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:11.531 Nvme1n1 : 1.01 13721.31 53.60 0.00 0.00 9295.34 5328.36 16640.45 00:11:11.531 [2024-11-06T14:15:39.166Z] =================================================================================================================== 00:11:11.531 [2024-11-06T14:15:39.166Z] Total : 13721.31 53.60 0.00 0.00 9295.34 5328.36 16640.45 00:11:11.531 15934.00 IOPS, 62.24 MiB/s 00:11:11.531 Latency(us) 00:11:11.531 [2024-11-06T14:15:39.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:11.532 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:11.532 Nvme1n1 : 1.01 16014.30 62.56 0.00 0.00 7971.54 3519.00 17552.25 00:11:11.532 [2024-11-06T14:15:39.167Z] 
=================================================================================================================== 00:11:11.532 [2024-11-06T14:15:39.167Z] Total : 16014.30 62.56 0.00 0.00 7971.54 3519.00 17552.25 00:11:12.100 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3003289 00:11:12.100 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3003291 00:11:12.100 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3003294 00:11:12.100 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:12.100 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.100 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:12.100 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.100 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:12.100 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:12.100 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:12.100 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:11:12.100 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:12.100 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:12.100 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:11:12.100 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:12.100 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:12.100 rmmod nvme_rdma 00:11:12.100 rmmod nvme_fabrics 00:11:12.100 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:12.360 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:11:12.360 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:11:12.360 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3003080 ']' 00:11:12.360 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3003080 00:11:12.360 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 3003080 ']' 00:11:12.360 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 3003080 00:11:12.360 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:11:12.360 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:12.360 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3003080 00:11:12.360 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:12.360 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = 
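
Teardown mirrors setup: the subsystem is removed over RPC, the host-side NVMe fabrics modules are unloaded (the rmmod nvme_rdma / rmmod nvme_fabrics lines above), and the nvmf_tgt process is killed and reaped. Condensed into the same order the trace follows:

  # Sketch of the traced teardown (nvmf_delete_subsystem + nvmftestfini/nvmfcleanup).
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sync
  modprobe -v -r nvme-rdma       # also pulls nvme_fabrics out, as the rmmod lines show
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                # nvmfpid from the earlier nvmf_tgt launch
  wait "$nvmfpid"
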
sudo ']' 00:11:12.360 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3003080' 00:11:12.360 killing process with pid 3003080 00:11:12.360 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 3003080 00:11:12.360 15:15:39 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 3003080 00:11:13.851 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:13.851 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:13.851 00:11:13.851 real 0m13.462s 00:11:13.851 user 0m32.057s 00:11:13.851 sys 0m7.433s 00:11:13.851 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:13.851 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:13.851 ************************************ 00:11:13.851 END TEST nvmf_bdev_io_wait 00:11:13.851 ************************************ 00:11:14.111 15:15:41 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:11:14.111 15:15:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:14.111 15:15:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:14.111 15:15:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:14.111 ************************************ 00:11:14.111 START TEST nvmf_queue_depth 00:11:14.111 ************************************ 00:11:14.111 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:11:14.111 * Looking for test storage... 
00:11:14.111 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:14.111 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:14.111 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:11:14.111 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:14.111 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:14.111 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.111 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.111 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.111 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.111 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.111 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.111 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.111 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.111 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.111 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.112 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.112 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:11:14.112 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:11:14.112 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.112 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:14.112 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:11:14.112 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:11:14.112 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.112 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:11:14.112 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.371 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:11:14.371 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:11:14.371 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.371 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:11:14.371 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.371 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.371 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.371 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:11:14.371 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.371 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:14.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.371 --rc genhtml_branch_coverage=1 00:11:14.371 --rc genhtml_function_coverage=1 00:11:14.371 --rc genhtml_legend=1 00:11:14.371 --rc geninfo_all_blocks=1 00:11:14.371 --rc geninfo_unexecuted_blocks=1 00:11:14.371 00:11:14.371 ' 00:11:14.371 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:14.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.371 --rc genhtml_branch_coverage=1 00:11:14.371 --rc genhtml_function_coverage=1 00:11:14.371 --rc genhtml_legend=1 00:11:14.371 --rc geninfo_all_blocks=1 00:11:14.371 --rc geninfo_unexecuted_blocks=1 00:11:14.371 00:11:14.371 ' 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:14.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.372 --rc genhtml_branch_coverage=1 00:11:14.372 --rc genhtml_function_coverage=1 00:11:14.372 --rc genhtml_legend=1 00:11:14.372 --rc geninfo_all_blocks=1 00:11:14.372 --rc geninfo_unexecuted_blocks=1 00:11:14.372 00:11:14.372 ' 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:14.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.372 --rc genhtml_branch_coverage=1 00:11:14.372 --rc genhtml_function_coverage=1 00:11:14.372 --rc genhtml_legend=1 00:11:14.372 --rc geninfo_all_blocks=1 00:11:14.372 --rc geninfo_unexecuted_blocks=1 00:11:14.372 00:11:14.372 ' 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.372 15:15:41 
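
The lcov probe traced through scripts/common.sh is a field-wise version comparison: both version strings are split on '.', '-' and ':' and compared element by element as decimal integers, which is how 1.15 ends up ordered below 2. A stripped-down sketch of that comparison (function name hypothetical, numeric fields assumed):

  # Sketch: element-wise version comparison in the spirit of the traced cmp_versions.
  version_lt() {                              # returns 0 (true) when $1 < $2
      local IFS=.-:
      local -a ver1=($1) ver2=($2)
      local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( i = 0; i < max; i++ )); do
          local a=${ver1[i]:-0} b=${ver2[i]:-0}
          (( 10#$a < 10#$b )) && return 0
          (( 10#$a > 10#$b )) && return 1
      done
      return 1                                # equal is not less-than
  }

  version_lt 1.15 2 && echo 'lcov is older than 2'
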
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
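
common.sh establishes the host identity once per run: nvme gen-hostnqn emits a uuid-based NQN, and the host ID reused for --hostid is that NQN's trailing UUID (809f3706-... above). A sketch of that derivation; the parameter-expansion form is an assumption, the harness may compute it differently:

  # Sketch: derive the NVMe host identity consistent with the traced values.
  NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}       # trailing UUID doubles as the host ID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
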
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:14.372 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
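
The "[: : integer expression expected" complaint above is harmless in this run but easy to misread: build_nvmf_app_args performs a numeric test ('[' '' -eq 1 ']') on a variable that is simply unset in this configuration, so the branch is skipped with a warning. A defensive variant of that kind of check, with SOME_FLAG as a placeholder for whichever variable is empty here:

  # Sketch: default the flag before the numeric test so an unset value cannot
  # trip "[: : integer expression expected". SOME_FLAG is a hypothetical name.
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      :   # append whatever extra nvmf_tgt arguments the real branch adds
  fi
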
MALLOC_BLOCK_SIZE=512 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:11:14.372 15:15:41 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:20.940 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:20.940 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:11:20.940 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:20.940 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:20.940 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:20.940 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:20.941 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:20.941 Found 0000:18:00.1 (0x15b3 - 0x1015) 
00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:20.941 Found net devices under 0000:18:00.0: mlx_0_0 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:20.941 Found net devices under 0000:18:00.1: mlx_0_1 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # rdma_device_init 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # uname 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:20.941 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in 
$(get_rdma_if_list) 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:21.201 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:21.201 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:11:21.201 altname enp24s0f0np0 00:11:21.201 altname ens785f0np0 00:11:21.201 inet 192.168.100.8/24 scope global mlx_0_0 00:11:21.201 valid_lft forever preferred_lft forever 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:21.201 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:21.201 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:11:21.201 altname enp24s0f1np1 00:11:21.201 altname ens785f1np1 00:11:21.201 inet 192.168.100.9/24 scope global mlx_0_1 00:11:21.201 valid_lft forever preferred_lft forever 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:21.201 15:15:48 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:21.201 192.168.100.9' 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:21.201 192.168.100.9' 00:11:21.201 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@485 -- # head -n 1 00:11:21.202 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:21.202 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:21.202 192.168.100.9' 00:11:21.202 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # tail -n +2 00:11:21.202 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # head -n 1 00:11:21.202 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:21.202 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:21.202 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:21.202 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:21.202 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:21.202 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:21.202 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:21.202 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:21.202 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:21.202 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.202 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3006950 00:11:21.202 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:21.202 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3006950 00:11:21.202 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3006950 ']' 00:11:21.202 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.202 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:21.202 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.202 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:21.202 15:15:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.202 [2024-11-06 15:15:48.832464] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
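Before the target starts, rdma_device_init loads the IB/RDMA kernel modules and allocate_nic_ips derives the test addresses from the mlx interfaces; the first and second target IPs are then just the first and second entries of RDMA_IP_LIST, as the head/tail pipeline above shows. A condensed sketch of that prep, assuming the interface names from the trace (mlx_0_0, mlx_0_1) and folding the module loads into one loop:

# Condensed sketch of rdma_device_init plus the IP selection traced above.
for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma; do
    modprobe "$m"
done

get_ip_address() {   # same pipeline as nvmf/common.sh@117 above
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9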
00:11:21.202 [2024-11-06 15:15:48.832571] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.461 [2024-11-06 15:15:48.984820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.461 [2024-11-06 15:15:49.093381] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.461 [2024-11-06 15:15:49.093439] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.461 [2024-11-06 15:15:49.093452] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.461 [2024-11-06 15:15:49.093466] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.461 [2024-11-06 15:15:49.093475] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.461 [2024-11-06 15:15:49.094822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.029 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:22.029 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:11:22.029 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:22.029 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:22.029 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:22.288 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.288 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:22.289 [2024-11-06 15:15:49.704570] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7f6121bbd940) succeed. 00:11:22.289 [2024-11-06 15:15:49.713986] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7f6121b79940) succeed. 
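nvmfappstart, traced just above, launches the target binary, waits for its RPC socket, and nvmf_create_transport then registers the RDMA transport that produces the two "Create IB device ... succeed" notices. A simplified sketch follows, with the binary and RPC paths copied from the log; the polling loop with rpc_get_methods is only a crude stand-in for the script's waitforlisten helper:

# Sketch of nvmfappstart -m 0x2 followed by the transport RPC from
# target/queue_depth.sh@23 (both commands as shown in the trace).
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5      # simplified wait for the app to listen on /var/tmp/spdk.sock
done
$SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192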
00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:22.289 Malloc0 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:22.289 [2024-11-06 15:15:49.903957] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3007154 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3007154 /var/tmp/bdevperf.sock 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3007154 ']' 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:22.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:22.289 15:15:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:22.547 [2024-11-06 15:15:49.993756] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:11:22.547 [2024-11-06 15:15:49.993874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3007154 ] 00:11:22.547 [2024-11-06 15:15:50.146163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.806 [2024-11-06 15:15:50.256970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.373 15:15:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:23.373 15:15:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:11:23.373 15:15:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:23.373 15:15:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.373 15:15:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:23.373 NVMe0n1 00:11:23.373 15:15:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.373 15:15:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:23.633 Running I/O for 10 seconds... 
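What follows are the live IOPS samples from that 10-second run. The test body that produced them, as traced above, exports a 64 MB malloc bdev (512-byte blocks) through subsystem cnode1 on the first RDMA listener, then drives it from a second SPDK app (bdevperf) at queue depth 1024 with 4 KiB verify I/O. A condensed recap, with every RPC copied verbatim from the trace and rpc() used only as local shorthand for scripts/rpc.py:

# Recap of target/queue_depth.sh@24-35 as traced above.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc() { $SPDK/scripts/rpc.py "$@"; }

rpc bdev_malloc_create 64 512 -b Malloc0      # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

# bdevperf runs on its own RPC socket; it is handed the remote controller,
# then perform_tests starts the timed verify workload.
$SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
    -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests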
00:11:25.506 14444.00 IOPS, 56.42 MiB/s [2024-11-06T14:15:54.077Z] 14848.00 IOPS, 58.00 MiB/s [2024-11-06T14:15:55.455Z] 15018.67 IOPS, 58.67 MiB/s [2024-11-06T14:15:56.392Z] 15104.00 IOPS, 59.00 MiB/s [2024-11-06T14:15:57.330Z] 15155.20 IOPS, 59.20 MiB/s [2024-11-06T14:15:58.267Z] 15189.33 IOPS, 59.33 MiB/s [2024-11-06T14:15:59.203Z] 15213.71 IOPS, 59.43 MiB/s [2024-11-06T14:16:00.137Z] 15232.00 IOPS, 59.50 MiB/s [2024-11-06T14:16:01.073Z] 15246.22 IOPS, 59.56 MiB/s [2024-11-06T14:16:01.331Z] 15225.70 IOPS, 59.48 MiB/s 00:11:33.696 Latency(us) 00:11:33.696 [2024-11-06T14:16:01.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:33.696 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:33.696 Verification LBA range: start 0x0 length 0x4000 00:11:33.696 NVMe0n1 : 10.05 15248.82 59.57 0.00 0.00 66928.93 19489.84 43082.80 00:11:33.696 [2024-11-06T14:16:01.331Z] =================================================================================================================== 00:11:33.696 [2024-11-06T14:16:01.331Z] Total : 15248.82 59.57 0.00 0.00 66928.93 19489.84 43082.80 00:11:33.696 { 00:11:33.696 "results": [ 00:11:33.696 { 00:11:33.696 "job": "NVMe0n1", 00:11:33.696 "core_mask": "0x1", 00:11:33.696 "workload": "verify", 00:11:33.696 "status": "finished", 00:11:33.696 "verify_range": { 00:11:33.696 "start": 0, 00:11:33.696 "length": 16384 00:11:33.696 }, 00:11:33.696 "queue_depth": 1024, 00:11:33.696 "io_size": 4096, 00:11:33.696 "runtime": 10.054089, 00:11:33.696 "iops": 15248.82065396477, 00:11:33.696 "mibps": 59.565705679549886, 00:11:33.696 "io_failed": 0, 00:11:33.696 "io_timeout": 0, 00:11:33.696 "avg_latency_us": 66928.93423092684, 00:11:33.696 "min_latency_us": 19489.83652173913, 00:11:33.696 "max_latency_us": 43082.79652173913 00:11:33.696 } 00:11:33.696 ], 00:11:33.696 "core_count": 1 00:11:33.696 } 00:11:33.696 15:16:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3007154 00:11:33.696 15:16:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3007154 ']' 00:11:33.696 15:16:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3007154 00:11:33.696 15:16:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:11:33.696 15:16:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:33.696 15:16:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3007154 00:11:33.697 15:16:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:33.697 15:16:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:33.697 15:16:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3007154' 00:11:33.697 killing process with pid 3007154 00:11:33.697 15:16:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3007154 00:11:33.697 Received shutdown signal, test time was about 10.000000 seconds 00:11:33.697 00:11:33.697 Latency(us) 00:11:33.697 [2024-11-06T14:16:01.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:33.697 [2024-11-06T14:16:01.332Z] 
=================================================================================================================== 00:11:33.697 [2024-11-06T14:16:01.332Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:33.697 15:16:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3007154 00:11:34.634 15:16:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:34.634 15:16:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:34.634 15:16:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:34.634 15:16:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:34.634 15:16:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:34.634 15:16:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:34.634 15:16:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:34.634 15:16:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:34.634 15:16:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:34.634 rmmod nvme_rdma 00:11:34.634 rmmod nvme_fabrics 00:11:34.634 15:16:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:34.634 15:16:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:34.634 15:16:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:34.634 15:16:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3006950 ']' 00:11:34.634 15:16:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3006950 00:11:34.634 15:16:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3006950 ']' 00:11:34.634 15:16:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3006950 00:11:34.634 15:16:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:11:34.634 15:16:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:34.634 15:16:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3006950 00:11:34.634 15:16:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:11:34.634 15:16:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:34.634 15:16:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3006950' 00:11:34.634 killing process with pid 3006950 00:11:34.634 15:16:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3006950 00:11:34.634 15:16:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3006950 00:11:36.014 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:36.014 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:36.014 00:11:36.014 real 0m22.029s 00:11:36.014 user 0m28.917s 00:11:36.014 sys 0m6.414s 00:11:36.014 
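As a quick consistency check on the summary above: 15248.82 IOPS at the 4096-byte I/O size works out to 15248.82 * 4096 / 2^20 ≈ 59.57 MiB/s, matching the reported throughput, and with 1024 I/Os kept in flight Little's law gives an expected average latency of roughly 1024 / 15248.82 s ≈ 67.2 ms, in line with the reported 66928.93 µs.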
15:16:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:36.014 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:36.014 ************************************ 00:11:36.014 END TEST nvmf_queue_depth 00:11:36.014 ************************************ 00:11:36.014 15:16:03 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:11:36.014 15:16:03 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:36.014 15:16:03 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:36.014 15:16:03 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:36.274 ************************************ 00:11:36.274 START TEST nvmf_target_multipath 00:11:36.274 ************************************ 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:11:36.274 * Looking for test storage... 00:11:36.274 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- 
# (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:36.274 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:36.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.274 --rc genhtml_branch_coverage=1 00:11:36.275 --rc genhtml_function_coverage=1 00:11:36.275 --rc genhtml_legend=1 00:11:36.275 --rc geninfo_all_blocks=1 00:11:36.275 --rc geninfo_unexecuted_blocks=1 00:11:36.275 00:11:36.275 ' 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:36.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.275 --rc genhtml_branch_coverage=1 00:11:36.275 --rc genhtml_function_coverage=1 00:11:36.275 --rc genhtml_legend=1 00:11:36.275 --rc geninfo_all_blocks=1 00:11:36.275 --rc geninfo_unexecuted_blocks=1 00:11:36.275 00:11:36.275 ' 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:36.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.275 --rc genhtml_branch_coverage=1 00:11:36.275 --rc genhtml_function_coverage=1 00:11:36.275 --rc genhtml_legend=1 00:11:36.275 --rc geninfo_all_blocks=1 00:11:36.275 --rc geninfo_unexecuted_blocks=1 00:11:36.275 00:11:36.275 ' 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:36.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.275 --rc genhtml_branch_coverage=1 00:11:36.275 --rc genhtml_function_coverage=1 00:11:36.275 --rc genhtml_legend=1 00:11:36.275 --rc geninfo_all_blocks=1 00:11:36.275 --rc geninfo_unexecuted_blocks=1 00:11:36.275 00:11:36.275 ' 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # 
source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:36.275 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:36.275 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.534 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:36.534 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:36.534 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:11:36.534 15:16:03 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@319 -- # net_devs=() 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:43.105 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:43.105 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:43.106 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:43.106 Found net devices under 0000:18:00.0: mlx_0_0 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:43.106 
15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:43.106 Found net devices under 0000:18:00.1: mlx_0_1 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # rdma_device_init 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 
00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:43.106 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:43.106 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:11:43.106 altname enp24s0f0np0 00:11:43.106 altname ens785f0np0 00:11:43.106 inet 192.168.100.8/24 scope global mlx_0_0 00:11:43.106 valid_lft forever preferred_lft forever 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:43.106 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:43.366 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:43.367 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:43.367 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:11:43.367 altname enp24s0f1np1 00:11:43.367 altname ens785f1np1 00:11:43.367 inet 192.168.100.9/24 scope global mlx_0_1 00:11:43.367 valid_lft forever preferred_lft forever 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:43.367 192.168.100.9' 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:43.367 192.168.100.9' 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # head -n 1 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:43.367 192.168.100.9' 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # tail -n +2 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # head -n 1 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:11:43.367 run this test only with TCP transport for now 00:11:43.367 15:16:10 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:43.367 rmmod nvme_rdma 00:11:43.367 rmmod nvme_fabrics 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:43.367 00:11:43.367 real 0m7.254s 00:11:43.367 user 0m2.159s 00:11:43.367 sys 0m5.306s 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:43.367 ************************************ 00:11:43.367 END TEST nvmf_target_multipath 00:11:43.367 ************************************ 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:43.367 15:16:10 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:43.627 ************************************ 00:11:43.627 START TEST nvmf_zcopy 00:11:43.627 ************************************ 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:11:43.627 * Looking for test storage... 00:11:43.627 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:43.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.627 --rc genhtml_branch_coverage=1 00:11:43.627 --rc genhtml_function_coverage=1 00:11:43.627 --rc genhtml_legend=1 00:11:43.627 --rc geninfo_all_blocks=1 00:11:43.627 --rc geninfo_unexecuted_blocks=1 00:11:43.627 00:11:43.627 ' 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:43.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.627 --rc genhtml_branch_coverage=1 00:11:43.627 --rc genhtml_function_coverage=1 00:11:43.627 --rc genhtml_legend=1 00:11:43.627 --rc geninfo_all_blocks=1 00:11:43.627 --rc geninfo_unexecuted_blocks=1 00:11:43.627 00:11:43.627 ' 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:43.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.627 --rc genhtml_branch_coverage=1 00:11:43.627 --rc genhtml_function_coverage=1 00:11:43.627 --rc genhtml_legend=1 00:11:43.627 --rc geninfo_all_blocks=1 00:11:43.627 --rc geninfo_unexecuted_blocks=1 00:11:43.627 00:11:43.627 ' 00:11:43.627 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:43.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.627 --rc genhtml_branch_coverage=1 00:11:43.627 --rc genhtml_function_coverage=1 00:11:43.627 --rc genhtml_legend=1 00:11:43.627 --rc geninfo_all_blocks=1 00:11:43.627 --rc geninfo_unexecuted_blocks=1 00:11:43.627 00:11:43.627 ' 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:43.628 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:11:43.628 15:16:11 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:51.754 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:51.754 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:51.754 Found net devices under 0000:18:00.0: mlx_0_0 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:51.754 Found net devices under 0000:18:00.1: mlx_0_1 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # rdma_device_init 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe 
rdma_ucm 00:11:51.754 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:51.755 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:51.755 15:16:17 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:51.755 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:51.755 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:11:51.755 altname enp24s0f0np0 00:11:51.755 altname ens785f0np0 00:11:51.755 inet 192.168.100.8/24 scope global mlx_0_0 
00:11:51.755 valid_lft forever preferred_lft forever 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:51.755 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:51.755 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:11:51.755 altname enp24s0f1np1 00:11:51.755 altname ens785f1np1 00:11:51.755 inet 192.168.100.9/24 scope global mlx_0_1 00:11:51.755 valid_lft forever preferred_lft forever 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:51.755 15:16:18 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:51.755 192.168.100.9' 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:51.755 192.168.100.9' 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # head -n 1 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:51.755 192.168.100.9' 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # tail -n +2 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # head -n 1 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3014791 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3014791 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 3014791 ']' 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:51.755 15:16:18 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:51.755 [2024-11-06 15:16:18.298840] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:11:51.756 [2024-11-06 15:16:18.298953] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.756 [2024-11-06 15:16:18.453847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.756 [2024-11-06 15:16:18.557507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.756 [2024-11-06 15:16:18.557564] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.756 [2024-11-06 15:16:18.557577] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.756 [2024-11-06 15:16:18.557590] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.756 [2024-11-06 15:16:18.557599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:51.756 [2024-11-06 15:16:18.558887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:11:51.756 Unsupported transport: rdma 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@810 -- # type=--id 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@811 -- # id=0 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@822 -- # for n in $shm_files 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:51.756 nvmf_trace.0 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@825 -- # return 0 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:51.756 rmmod nvme_rdma 00:11:51.756 rmmod nvme_fabrics 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 
00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3014791 ']' 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3014791 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 3014791 ']' 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 3014791 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3014791 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3014791' 00:11:51.756 killing process with pid 3014791 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 3014791 00:11:51.756 15:16:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 3014791 00:11:53.134 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:53.134 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:53.134 00:11:53.135 real 0m9.337s 00:11:53.135 user 0m4.320s 00:11:53.135 sys 0m5.827s 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:53.135 ************************************ 00:11:53.135 END TEST nvmf_zcopy 00:11:53.135 ************************************ 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:53.135 ************************************ 00:11:53.135 START TEST nvmf_nmic 00:11:53.135 ************************************ 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:11:53.135 * Looking for test storage... 
00:11:53.135 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:53.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.135 --rc genhtml_branch_coverage=1 00:11:53.135 --rc genhtml_function_coverage=1 00:11:53.135 --rc genhtml_legend=1 00:11:53.135 --rc geninfo_all_blocks=1 00:11:53.135 --rc geninfo_unexecuted_blocks=1 00:11:53.135 00:11:53.135 ' 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:53.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.135 --rc genhtml_branch_coverage=1 00:11:53.135 --rc genhtml_function_coverage=1 00:11:53.135 --rc genhtml_legend=1 00:11:53.135 --rc geninfo_all_blocks=1 00:11:53.135 --rc geninfo_unexecuted_blocks=1 00:11:53.135 00:11:53.135 ' 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:53.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.135 --rc genhtml_branch_coverage=1 00:11:53.135 --rc genhtml_function_coverage=1 00:11:53.135 --rc genhtml_legend=1 00:11:53.135 --rc geninfo_all_blocks=1 00:11:53.135 --rc geninfo_unexecuted_blocks=1 00:11:53.135 00:11:53.135 ' 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:53.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.135 --rc genhtml_branch_coverage=1 00:11:53.135 --rc genhtml_function_coverage=1 00:11:53.135 --rc genhtml_legend=1 00:11:53.135 --rc geninfo_all_blocks=1 00:11:53.135 --rc geninfo_unexecuted_blocks=1 00:11:53.135 00:11:53.135 ' 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.135 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:53.136 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:53.136 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:53.136 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:53.136 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:53.136 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:53.136 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:53.136 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:53.136 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:53.136 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:53.136 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:53.136 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:53.136 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:53.136 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 
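[Note] The block above is nvmf/common.sh establishing the fabric defaults every target test reuses. Pulled out of the trace, the relevant settings amount to roughly the following sketch (the hostid derivation is an assumption; the printed values match it):

    NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100                       # RDMA interfaces get .8 and .9 below
    NVME_HOSTNQN=$(nvme gen-hostnqn)                 # nqn.2014-08.org.nvmexpress:uuid:<host uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}              # assumed: the uuid suffix doubles as hostid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT='nvme connect'                      # later widened to 'nvme connect -i 15' for RDMA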
00:11:53.136 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:53.136 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:53.136 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:53.136 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:53.136 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:53.136 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.136 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.136 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.136 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:53.136 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:53.136 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:53.136 15:16:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:59.706 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:59.706 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:59.706 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:59.706 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:59.706 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:59.706 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:59.706 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:59.707 15:16:27 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:59.707 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:59.707 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:59.707 Found net devices under 0000:18:00.0: mlx_0_0 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:59.707 Found net devices under 0000:18:00.1: mlx_0_1 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # rdma_device_init 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:59.707 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:59.966 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:59.966 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:59.966 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 
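[Note] rdma_device_init above boils down to loading the kernel IB/RDMA stack before any listener is created. The module list is verbatim from the trace; the loop is just a condensation:

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done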
00:11:59.966 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:59.966 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:59.966 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:59.966 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:59.966 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:59.966 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:59.966 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:59.966 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:59.966 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:59.966 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:59.967 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:59.967 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:11:59.967 altname enp24s0f0np0 00:11:59.967 altname ens785f0np0 
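[Note] The address discovery traced above (get_ip_address) is simply the first IPv4 address on the interface with the prefix length stripped, reconstructed here from the commands in the trace:

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # prints 192.168.100.8 on this rig
    get_ip_address mlx_0_1    # prints 192.168.100.9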
00:11:59.967 inet 192.168.100.8/24 scope global mlx_0_0 00:11:59.967 valid_lft forever preferred_lft forever 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:59.967 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:59.967 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:11:59.967 altname enp24s0f1np1 00:11:59.967 altname ens785f1np1 00:11:59.967 inet 192.168.100.9/24 scope global mlx_0_1 00:11:59.967 valid_lft forever preferred_lft forever 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:59.967 
15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:59.967 192.168.100.9' 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:59.967 192.168.100.9' 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # head -n 1 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:59.967 192.168.100.9' 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # tail -n +2 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # head -n 1 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3018074 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3018074 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 3018074 ']' 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:59.967 15:16:27 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:00.226 [2024-11-06 15:16:27.676303] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:12:00.226 [2024-11-06 15:16:27.676409] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.226 [2024-11-06 15:16:27.826552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.485 [2024-11-06 15:16:27.938211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.485 [2024-11-06 15:16:27.938274] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.485 [2024-11-06 15:16:27.938303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.485 [2024-11-06 15:16:27.938318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.485 [2024-11-06 15:16:27.938328] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
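[Note] The only material difference from the earlier zcopy startup is the core mask: nmic starts the target with four cores, which is why the notices that follow report four reactors instead of one.

    "$NVMF_APP" -i 0 -e 0xFFFF -m 0xF    # 0xF = cores 0-3, vs. 0x2 (core 1 only) for zcopy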
00:12:00.485 [2024-11-06 15:16:27.940730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.485 [2024-11-06 15:16:27.940819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.485 [2024-11-06 15:16:27.940880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.485 [2024-11-06 15:16:27.940907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:01.053 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:01.053 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:12:01.053 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:01.053 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:01.053 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:01.053 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.053 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:01.053 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.053 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:01.053 [2024-11-06 15:16:28.556374] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7ff2c4fbd940) succeed. 00:12:01.053 [2024-11-06 15:16:28.565994] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7ff2c4f79940) succeed. 
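[Note] With the target up, the test drives it entirely over JSON-RPC. Condensed from the rpc_cmd calls in the trace that follows (rpc_cmd is assumed to be a thin wrapper around scripts/rpc.py against /var/tmp/spdk.sock; all arguments are as traced):

    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420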
00:12:01.312 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.312 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:01.312 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.312 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:01.312 Malloc0 00:12:01.312 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.312 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:01.312 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.312 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:01.312 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.312 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:01.312 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.312 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:01.571 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.571 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:01.571 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.571 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:01.571 [2024-11-06 15:16:28.960786] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:01.572 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.572 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:01.572 test case1: single bdev can't be used in multiple subsystems 00:12:01.572 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:01.572 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.572 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:01.572 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.572 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:12:01.572 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.572 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:01.572 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.572 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:01.572 15:16:28 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:01.572 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.572 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:01.572 [2024-11-06 15:16:28.988617] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:01.572 [2024-11-06 15:16:28.988655] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:01.572 [2024-11-06 15:16:28.988669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.572 request: 00:12:01.572 { 00:12:01.572 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:01.572 "namespace": { 00:12:01.572 "bdev_name": "Malloc0", 00:12:01.572 "no_auto_visible": false 00:12:01.572 }, 00:12:01.572 "method": "nvmf_subsystem_add_ns", 00:12:01.572 "req_id": 1 00:12:01.572 } 00:12:01.572 Got JSON-RPC error response 00:12:01.572 response: 00:12:01.572 { 00:12:01.572 "code": -32602, 00:12:01.572 "message": "Invalid parameters" 00:12:01.572 } 00:12:01.572 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:01.572 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:01.572 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:01.572 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:01.572 Adding namespace failed - expected result. 00:12:01.572 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:01.572 test case2: host connect to nvmf target in multiple paths 00:12:01.572 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:12:01.572 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.572 15:16:28 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:01.572 [2024-11-06 15:16:29.004700] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:12:01.572 15:16:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.572 15:16:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:02.521 15:16:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:12:03.458 15:16:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:03.458 15:16:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:12:03.458 15:16:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 
nvme_devices=0 00:12:03.458 15:16:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:03.458 15:16:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:12:05.990 15:16:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:05.990 15:16:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:05.990 15:16:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:05.990 15:16:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:05.990 15:16:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:05.990 15:16:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:12:05.990 15:16:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:05.990 [global] 00:12:05.990 thread=1 00:12:05.990 invalidate=1 00:12:05.990 rw=write 00:12:05.990 time_based=1 00:12:05.990 runtime=1 00:12:05.990 ioengine=libaio 00:12:05.990 direct=1 00:12:05.990 bs=4096 00:12:05.990 iodepth=1 00:12:05.990 norandommap=0 00:12:05.990 numjobs=1 00:12:05.990 00:12:05.990 verify_dump=1 00:12:05.990 verify_backlog=512 00:12:05.990 verify_state_save=0 00:12:05.990 do_verify=1 00:12:05.990 verify=crc32c-intel 00:12:05.990 [job0] 00:12:05.990 filename=/dev/nvme0n1 00:12:05.990 Could not set queue depth (nvme0n1) 00:12:05.990 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:05.990 fio-3.35 00:12:05.990 Starting 1 thread 00:12:06.928 00:12:06.928 job0: (groupid=0, jobs=1): err= 0: pid=3018947: Wed Nov 6 15:16:34 2024 00:12:06.928 read: IOPS=6238, BW=24.4MiB/s (25.6MB/s)(24.4MiB/1001msec) 00:12:06.928 slat (nsec): min=8313, max=38344, avg=8884.98, stdev=1111.99 00:12:06.928 clat (usec): min=48, max=140, avg=66.18, stdev= 4.29 00:12:06.928 lat (usec): min=64, max=149, avg=75.07, stdev= 4.45 00:12:06.928 clat percentiles (usec): 00:12:06.928 | 1.00th=[ 59], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 63], 00:12:06.928 | 30.00th=[ 64], 40.00th=[ 65], 50.00th=[ 67], 60.00th=[ 68], 00:12:06.928 | 70.00th=[ 69], 80.00th=[ 70], 90.00th=[ 72], 95.00th=[ 74], 00:12:06.928 | 99.00th=[ 79], 99.50th=[ 81], 99.90th=[ 88], 99.95th=[ 90], 00:12:06.928 | 99.99th=[ 141] 00:12:06.928 write: IOPS=6649, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1001msec); 0 zone resets 00:12:06.928 slat (nsec): min=8786, max=37361, avg=11655.25, stdev=1189.48 00:12:06.928 clat (usec): min=43, max=127, avg=62.99, stdev= 4.24 00:12:06.928 lat (usec): min=63, max=164, avg=74.65, stdev= 4.44 00:12:06.928 clat percentiles (usec): 00:12:06.928 | 1.00th=[ 56], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 60], 00:12:06.928 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 64], 00:12:06.928 | 70.00th=[ 65], 80.00th=[ 67], 90.00th=[ 69], 95.00th=[ 71], 00:12:06.928 | 99.00th=[ 75], 99.50th=[ 78], 99.90th=[ 85], 99.95th=[ 88], 00:12:06.928 | 99.99th=[ 128] 00:12:06.928 bw ( KiB/s): min=26888, max=26888, per=100.00%, avg=26888.00, stdev= 0.00, samples=1 00:12:06.928 iops : min= 6722, max= 6722, avg=6722.00, stdev= 0.00, samples=1 00:12:06.928 lat (usec) : 50=0.03%, 100=99.95%, 250=0.02% 00:12:06.928 cpu : usr=8.40%, sys=13.30%, ctx=12902, 
majf=0, minf=1 00:12:06.928 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:06.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.929 issued rwts: total=6245,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.929 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:06.929 00:12:06.929 Run status group 0 (all jobs): 00:12:06.929 READ: bw=24.4MiB/s (25.6MB/s), 24.4MiB/s-24.4MiB/s (25.6MB/s-25.6MB/s), io=24.4MiB (25.6MB), run=1001-1001msec 00:12:06.929 WRITE: bw=26.0MiB/s (27.2MB/s), 26.0MiB/s-26.0MiB/s (27.2MB/s-27.2MB/s), io=26.0MiB (27.3MB), run=1001-1001msec 00:12:06.929 00:12:06.929 Disk stats (read/write): 00:12:06.929 nvme0n1: ios=5682/5958, merge=0/0, ticks=354/339, in_queue=693, util=90.88% 00:12:06.929 15:16:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:08.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:08.836 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:08.836 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:12:08.836 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:08.836 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:08.836 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:08.836 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:08.836 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:12:08.836 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:08.836 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:08.836 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:08.836 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:12:08.836 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:08.836 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:08.836 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:12:08.836 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:08.836 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:09.096 rmmod nvme_rdma 00:12:09.096 rmmod nvme_fabrics 00:12:09.096 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:09.096 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:12:09.096 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:12:09.096 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3018074 ']' 00:12:09.096 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3018074 00:12:09.096 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 3018074 ']' 00:12:09.096 15:16:36 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 3018074 00:12:09.096 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:12:09.096 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:09.096 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3018074 00:12:09.096 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:09.096 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:09.096 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3018074' 00:12:09.096 killing process with pid 3018074 00:12:09.096 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 3018074 00:12:09.096 15:16:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 3018074 00:12:11.077 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:11.077 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:11.077 00:12:11.077 real 0m18.078s 00:12:11.077 user 0m45.060s 00:12:11.077 sys 0m6.478s 00:12:11.077 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:11.077 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:11.077 ************************************ 00:12:11.077 END TEST nvmf_nmic 00:12:11.077 ************************************ 00:12:11.077 15:16:38 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:12:11.077 15:16:38 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:11.077 15:16:38 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:11.077 15:16:38 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:11.077 ************************************ 00:12:11.077 START TEST nvmf_fio_target 00:12:11.077 ************************************ 00:12:11.077 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:12:11.077 * Looking for test storage... 
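[Note] Before fio.sh repeats the same bring-up, it is worth noting how the preceding test attached the host side: two nvme connect calls against ports 4420 and 4421 give the multipath coverage, and waitforserial polls lsblk for the subsystem serial before fio is started. A rough sketch of that polling helper, with the signature assumed from the trace:

    waitforserial() {
        local serial=$1 expected=${2:-1} i=0
        while (( i++ <= 15 )); do
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == expected )) && return 0
            sleep 2
        done
        return 1
    }
    waitforserial SPDKISFASTANDAWESOME    # returns once the namespace shows up as /dev/nvme0n1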
00:12:11.077 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:11.077 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:11.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.336 --rc genhtml_branch_coverage=1 00:12:11.336 --rc genhtml_function_coverage=1 00:12:11.336 --rc genhtml_legend=1 00:12:11.336 --rc geninfo_all_blocks=1 00:12:11.336 --rc geninfo_unexecuted_blocks=1 00:12:11.336 00:12:11.336 ' 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:11.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.336 --rc genhtml_branch_coverage=1 00:12:11.336 --rc genhtml_function_coverage=1 00:12:11.336 --rc genhtml_legend=1 00:12:11.336 --rc geninfo_all_blocks=1 00:12:11.336 --rc geninfo_unexecuted_blocks=1 00:12:11.336 00:12:11.336 ' 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:11.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.336 --rc genhtml_branch_coverage=1 00:12:11.336 --rc genhtml_function_coverage=1 00:12:11.336 --rc genhtml_legend=1 00:12:11.336 --rc geninfo_all_blocks=1 00:12:11.336 --rc geninfo_unexecuted_blocks=1 00:12:11.336 00:12:11.336 ' 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:11.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.336 --rc genhtml_branch_coverage=1 00:12:11.336 --rc genhtml_function_coverage=1 00:12:11.336 --rc genhtml_legend=1 00:12:11.336 --rc geninfo_all_blocks=1 00:12:11.336 --rc geninfo_unexecuted_blocks=1 00:12:11.336 00:12:11.336 ' 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@7 -- # uname -s 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.336 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:11.337 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:11.337 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:11.337 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:11.337 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:11.337 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:11.337 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:11.337 
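The MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 values set just above are the arguments fio.sh later hands to bdev_malloc_create (those calls are echoed further down in this log). A minimal sketch of the equivalent manual RPC, assuming a running nvmf_tgt listening on the default /var/tmp/spdk.sock:

# Create one 64 MiB malloc bdev with a 512-byte block size; the new bdev name (Malloc0, Malloc1, ...) is printed on success.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512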
15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:11.337 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:11.337 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:11.337 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:11.337 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:11.337 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:11.337 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:11.337 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.337 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.337 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.337 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:11.337 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:11.337 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:12:11.337 15:16:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
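The e810/x722/mlx arrays being populated here classify NICs purely by PCI vendor:device ID; on this node the matches reported just below are 0x15b3:0x1015 functions at 0000:18:00.0 and 0000:18:00.1. A quick way to confirm the same inventory by hand, assuming lspci is installed:

# List all Mellanox (vendor 0x15b3) functions with numeric IDs.
lspci -nn -d 15b3:
# Or inspect one slot directly.
lspci -nn -s 18:00.0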
00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:17.905 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:17.905 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:17.905 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:17.906 Found net devices under 0000:18:00.0: mlx_0_0 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:17.906 Found net devices under 0000:18:00.1: mlx_0_1 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # rdma_device_init 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:17.906 15:16:45 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:17.906 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:18.165 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:18.165 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:12:18.165 altname enp24s0f0np0 00:12:18.165 altname ens785f0np0 00:12:18.165 inet 192.168.100.8/24 scope global mlx_0_0 00:12:18.165 valid_lft forever preferred_lft forever 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:18.165 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:18.166 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:18.166 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:12:18.166 altname enp24s0f1np1 00:12:18.166 altname ens785f1np1 00:12:18.166 inet 192.168.100.9/24 scope global mlx_0_1 00:12:18.166 valid_lft forever preferred_lft forever 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:18.166 15:16:45 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:18.166 192.168.100.9' 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:18.166 192.168.100.9' 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # head -n 1 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:18.166 192.168.100.9' 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # tail -n +2 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # head -n 1 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3022556 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3022556 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 3022556 ']' 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:18.166 15:16:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.425 [2024-11-06 15:16:45.847322] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:12:18.425 [2024-11-06 15:16:45.847436] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.425 [2024-11-06 15:16:45.995225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.683 [2024-11-06 15:16:46.102935] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
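The two target addresses used for the rest of the run were derived above by taking the first IPv4 address of each mlx_0_* netdev. A stand-alone loop reproducing that discovery, assuming the same interface names as on this node:

# Print the first IPv4 address of each RDMA-capable interface.
for ifc in mlx_0_0 mlx_0_1; do
    ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done
# First line -> NVMF_FIRST_TARGET_IP (192.168.100.8), second -> NVMF_SECOND_TARGET_IP (192.168.100.9).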
00:12:18.683 [2024-11-06 15:16:46.102988] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.683 [2024-11-06 15:16:46.103016] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.683 [2024-11-06 15:16:46.103029] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.683 [2024-11-06 15:16:46.103039] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:18.683 [2024-11-06 15:16:46.105321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.683 [2024-11-06 15:16:46.105367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.683 [2024-11-06 15:16:46.105454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:18.683 [2024-11-06 15:16:46.105425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.252 15:16:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:19.252 15:16:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:12:19.252 15:16:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:19.252 15:16:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:19.252 15:16:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.252 15:16:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.252 15:16:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:19.511 [2024-11-06 15:16:46.905996] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f176b17d940) succeed. 00:12:19.511 [2024-11-06 15:16:46.915507] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f176b139940) succeed. 
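With both IB devices created, the target-side provisioning that follows in the log is a short sequence of rpc.py calls; condensed into one sketch, assuming the same socket, NQN, serial and listener address used in this run (the raid0 and concat0 bdevs are attached as additional namespaces the same way):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # RDMA transport (done in the entry just above)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420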
00:12:19.770 15:16:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:20.029 15:16:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:20.029 15:16:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:20.288 15:16:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:20.288 15:16:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:20.546 15:16:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:20.546 15:16:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:20.805 15:16:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:20.805 15:16:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:21.064 15:16:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:21.323 15:16:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:21.323 15:16:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:21.582 15:16:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:21.582 15:16:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:21.840 15:16:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:21.841 15:16:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:22.099 15:16:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:22.358 15:16:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:22.358 15:16:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:22.358 15:16:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:22.358 15:16:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.616 15:16:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:22.875 [2024-11-06 15:16:50.372567] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:22.875 15:16:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:23.134 15:16:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:23.393 15:16:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:24.332 15:16:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:24.332 15:16:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:12:24.332 15:16:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:24.332 15:16:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:12:24.332 15:16:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:12:24.332 15:16:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:12:26.236 15:16:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:26.236 15:16:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:26.236 15:16:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:26.236 15:16:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:12:26.236 15:16:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:26.236 15:16:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:12:26.236 15:16:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:26.236 [global] 00:12:26.236 thread=1 00:12:26.236 invalidate=1 00:12:26.236 rw=write 00:12:26.236 time_based=1 00:12:26.236 runtime=1 00:12:26.236 ioengine=libaio 00:12:26.236 direct=1 00:12:26.236 bs=4096 00:12:26.236 iodepth=1 00:12:26.236 norandommap=0 00:12:26.236 numjobs=1 00:12:26.236 00:12:26.236 verify_dump=1 00:12:26.236 verify_backlog=512 00:12:26.236 verify_state_save=0 00:12:26.236 do_verify=1 00:12:26.236 verify=crc32c-intel 00:12:26.236 [job0] 00:12:26.236 filename=/dev/nvme0n1 00:12:26.236 [job1] 00:12:26.236 filename=/dev/nvme0n2 00:12:26.236 [job2] 00:12:26.236 filename=/dev/nvme0n3 00:12:26.236 [job3] 00:12:26.236 filename=/dev/nvme0n4 00:12:26.500 Could not set queue depth (nvme0n1) 00:12:26.500 Could not set queue depth (nvme0n2) 00:12:26.500 Could not set queue depth (nvme0n3) 00:12:26.500 Could not set queue depth (nvme0n4) 00:12:26.758 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:26.758 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:26.758 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:26.758 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:26.758 fio-3.35 00:12:26.758 Starting 4 threads 00:12:28.131 00:12:28.131 job0: (groupid=0, jobs=1): err= 0: pid=3023868: Wed Nov 6 15:16:55 2024 00:12:28.131 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:12:28.131 slat (nsec): min=8437, max=37124, avg=8991.97, stdev=900.28 00:12:28.131 clat (usec): min=78, max=288, avg=97.36, stdev= 7.97 00:12:28.131 lat (usec): min=87, max=297, avg=106.36, stdev= 8.05 00:12:28.131 clat percentiles (usec): 00:12:28.131 | 1.00th=[ 84], 5.00th=[ 87], 10.00th=[ 89], 20.00th=[ 92], 00:12:28.131 | 30.00th=[ 93], 40.00th=[ 95], 50.00th=[ 97], 60.00th=[ 99], 00:12:28.131 | 70.00th=[ 100], 80.00th=[ 103], 90.00th=[ 108], 95.00th=[ 111], 00:12:28.131 | 99.00th=[ 119], 99.50th=[ 122], 99.90th=[ 155], 99.95th=[ 157], 00:12:28.131 | 99.99th=[ 289] 00:12:28.131 write: IOPS=4672, BW=18.3MiB/s (19.1MB/s)(18.3MiB/1001msec); 0 zone resets 00:12:28.131 slat (nsec): min=7047, max=52113, avg=11691.34, stdev=1303.15 00:12:28.131 clat (usec): min=73, max=152, avg=91.95, stdev= 7.45 00:12:28.131 lat (usec): min=83, max=197, avg=103.64, stdev= 7.68 00:12:28.131 clat percentiles (usec): 00:12:28.131 | 1.00th=[ 78], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 86], 00:12:28.131 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 92], 60.00th=[ 93], 00:12:28.131 | 70.00th=[ 95], 80.00th=[ 98], 90.00th=[ 101], 95.00th=[ 105], 00:12:28.131 | 99.00th=[ 114], 99.50th=[ 117], 99.90th=[ 139], 99.95th=[ 145], 00:12:28.131 | 99.99th=[ 153] 00:12:28.131 bw ( KiB/s): min=20480, max=20480, per=32.15%, avg=20480.00, stdev= 0.00, samples=1 00:12:28.131 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:12:28.131 lat (usec) : 100=77.86%, 250=22.13%, 500=0.01% 00:12:28.131 cpu : usr=7.00%, sys=8.80%, ctx=9286, majf=0, minf=1 00:12:28.131 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.131 issued rwts: total=4608,4677,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.131 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:28.131 job1: (groupid=0, jobs=1): err= 0: pid=3023869: Wed Nov 6 15:16:55 2024 00:12:28.131 read: IOPS=4836, BW=18.9MiB/s (19.8MB/s)(18.9MiB/1001msec) 00:12:28.131 slat (nsec): min=8394, max=32337, avg=8944.11, stdev=916.43 00:12:28.131 clat (usec): min=74, max=166, avg=89.58, stdev= 7.25 00:12:28.131 lat (usec): min=82, max=175, avg=98.53, stdev= 7.37 00:12:28.131 clat percentiles (usec): 00:12:28.132 | 1.00th=[ 78], 5.00th=[ 81], 10.00th=[ 82], 20.00th=[ 84], 00:12:28.132 | 30.00th=[ 86], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 91], 00:12:28.132 | 70.00th=[ 93], 80.00th=[ 95], 90.00th=[ 99], 95.00th=[ 103], 00:12:28.132 | 99.00th=[ 110], 99.50th=[ 114], 99.90th=[ 133], 99.95th=[ 147], 00:12:28.132 | 99.99th=[ 167] 00:12:28.132 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:12:28.132 slat (nsec): min=9168, max=35660, avg=11795.22, stdev=1255.86 00:12:28.132 clat (usec): min=68, max=138, avg=84.95, 
stdev= 7.17 00:12:28.132 lat (usec): min=79, max=154, avg=96.74, stdev= 7.38 00:12:28.132 clat percentiles (usec): 00:12:28.132 | 1.00th=[ 73], 5.00th=[ 76], 10.00th=[ 77], 20.00th=[ 79], 00:12:28.132 | 30.00th=[ 81], 40.00th=[ 83], 50.00th=[ 84], 60.00th=[ 86], 00:12:28.132 | 70.00th=[ 88], 80.00th=[ 91], 90.00th=[ 95], 95.00th=[ 98], 00:12:28.132 | 99.00th=[ 105], 99.50th=[ 108], 99.90th=[ 115], 99.95th=[ 119], 00:12:28.132 | 99.99th=[ 139] 00:12:28.132 bw ( KiB/s): min=20512, max=20512, per=32.20%, avg=20512.00, stdev= 0.00, samples=1 00:12:28.132 iops : min= 5128, max= 5128, avg=5128.00, stdev= 0.00, samples=1 00:12:28.132 lat (usec) : 100=94.08%, 250=5.92% 00:12:28.132 cpu : usr=6.80%, sys=10.40%, ctx=9961, majf=0, minf=1 00:12:28.132 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.132 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.132 issued rwts: total=4841,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.132 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:28.132 job2: (groupid=0, jobs=1): err= 0: pid=3023870: Wed Nov 6 15:16:55 2024 00:12:28.132 read: IOPS=2970, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1001msec) 00:12:28.132 slat (nsec): min=8773, max=21925, avg=9447.05, stdev=890.89 00:12:28.132 clat (usec): min=92, max=334, avg=158.02, stdev=21.29 00:12:28.132 lat (usec): min=101, max=343, avg=167.47, stdev=21.30 00:12:28.132 clat percentiles (usec): 00:12:28.132 | 1.00th=[ 110], 5.00th=[ 123], 10.00th=[ 137], 20.00th=[ 145], 00:12:28.132 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 161], 00:12:28.132 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 202], 00:12:28.132 | 99.00th=[ 225], 99.50th=[ 233], 99.90th=[ 289], 99.95th=[ 302], 00:12:28.132 | 99.99th=[ 334] 00:12:28.132 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:12:28.132 slat (nsec): min=9688, max=34074, avg=11903.02, stdev=1237.72 00:12:28.132 clat (usec): min=85, max=407, avg=146.92, stdev=22.53 00:12:28.132 lat (usec): min=97, max=419, avg=158.82, stdev=22.53 00:12:28.132 clat percentiles (usec): 00:12:28.132 | 1.00th=[ 99], 5.00th=[ 109], 10.00th=[ 124], 20.00th=[ 135], 00:12:28.132 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 151], 00:12:28.132 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 169], 95.00th=[ 192], 00:12:28.132 | 99.00th=[ 206], 99.50th=[ 212], 99.90th=[ 297], 99.95th=[ 408], 00:12:28.132 | 99.99th=[ 408] 00:12:28.132 bw ( KiB/s): min=12288, max=12288, per=19.29%, avg=12288.00, stdev= 0.00, samples=1 00:12:28.132 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:28.132 lat (usec) : 100=0.86%, 250=98.91%, 500=0.23% 00:12:28.132 cpu : usr=3.30%, sys=7.10%, ctx=6045, majf=0, minf=1 00:12:28.132 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.132 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.132 issued rwts: total=2973,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.132 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:28.132 job3: (groupid=0, jobs=1): err= 0: pid=3023871: Wed Nov 6 15:16:55 2024 00:12:28.132 read: IOPS=2991, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1001msec) 00:12:28.132 slat (nsec): min=9177, max=28145, avg=9963.93, stdev=862.01 00:12:28.132 clat (usec): min=87, max=261, 
avg=156.65, stdev=20.56 00:12:28.132 lat (usec): min=96, max=271, avg=166.61, stdev=20.54 00:12:28.132 clat percentiles (usec): 00:12:28.132 | 1.00th=[ 108], 5.00th=[ 122], 10.00th=[ 137], 20.00th=[ 145], 00:12:28.132 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:12:28.132 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 200], 00:12:28.132 | 99.00th=[ 221], 99.50th=[ 225], 99.90th=[ 258], 99.95th=[ 260], 00:12:28.132 | 99.99th=[ 262] 00:12:28.132 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:12:28.132 slat (nsec): min=9762, max=36752, avg=12531.00, stdev=1302.39 00:12:28.132 clat (usec): min=83, max=234, avg=145.08, stdev=21.46 00:12:28.132 lat (usec): min=95, max=246, avg=157.61, stdev=21.48 00:12:28.132 clat percentiles (usec): 00:12:28.132 | 1.00th=[ 95], 5.00th=[ 105], 10.00th=[ 121], 20.00th=[ 133], 00:12:28.132 | 30.00th=[ 137], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 149], 00:12:28.132 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 167], 95.00th=[ 190], 00:12:28.132 | 99.00th=[ 206], 99.50th=[ 210], 99.90th=[ 221], 99.95th=[ 227], 00:12:28.132 | 99.99th=[ 235] 00:12:28.132 bw ( KiB/s): min=12288, max=12288, per=19.29%, avg=12288.00, stdev= 0.00, samples=1 00:12:28.132 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:28.132 lat (usec) : 100=1.62%, 250=98.32%, 500=0.07% 00:12:28.132 cpu : usr=4.40%, sys=6.70%, ctx=6066, majf=0, minf=1 00:12:28.132 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.132 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.132 issued rwts: total=2994,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.132 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:28.132 00:12:28.132 Run status group 0 (all jobs): 00:12:28.132 READ: bw=60.2MiB/s (63.1MB/s), 11.6MiB/s-18.9MiB/s (12.2MB/s-19.8MB/s), io=60.2MiB (63.1MB), run=1001-1001msec 00:12:28.132 WRITE: bw=62.2MiB/s (65.2MB/s), 12.0MiB/s-20.0MiB/s (12.6MB/s-20.9MB/s), io=62.3MiB (65.3MB), run=1001-1001msec 00:12:28.132 00:12:28.132 Disk stats (read/write): 00:12:28.132 nvme0n1: ios=3873/4096, merge=0/0, ticks=375/351, in_queue=726, util=86.27% 00:12:28.132 nvme0n2: ios=4096/4462, merge=0/0, ticks=329/345, in_queue=674, util=86.66% 00:12:28.132 nvme0n3: ios=2521/2560, merge=0/0, ticks=397/366, in_queue=763, util=88.92% 00:12:28.132 nvme0n4: ios=2539/2560, merge=0/0, ticks=385/362, in_queue=747, util=89.67% 00:12:28.132 15:16:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:28.132 [global] 00:12:28.132 thread=1 00:12:28.132 invalidate=1 00:12:28.132 rw=randwrite 00:12:28.132 time_based=1 00:12:28.132 runtime=1 00:12:28.132 ioengine=libaio 00:12:28.132 direct=1 00:12:28.132 bs=4096 00:12:28.132 iodepth=1 00:12:28.132 norandommap=0 00:12:28.132 numjobs=1 00:12:28.132 00:12:28.132 verify_dump=1 00:12:28.132 verify_backlog=512 00:12:28.132 verify_state_save=0 00:12:28.132 do_verify=1 00:12:28.132 verify=crc32c-intel 00:12:28.132 [job0] 00:12:28.132 filename=/dev/nvme0n1 00:12:28.132 [job1] 00:12:28.132 filename=/dev/nvme0n2 00:12:28.132 [job2] 00:12:28.132 filename=/dev/nvme0n3 00:12:28.132 [job3] 00:12:28.132 filename=/dev/nvme0n4 00:12:28.132 Could not set queue depth (nvme0n1) 00:12:28.132 Could not set queue depth (nvme0n2) 00:12:28.132 
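The bandwidth figures in the fio summaries follow directly from IOPS times the 4096-byte block size; for job0's write phase in the first run, for example:

# 4672 IOPS * 4096 B ~= 19.1 MB/s (~18.3 MiB/s), matching the job0 'write:' line above.
echo $(( 4672 * 4096 )) bytes/s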
Could not set queue depth (nvme0n3) 00:12:28.132 Could not set queue depth (nvme0n4) 00:12:28.132 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:28.132 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:28.132 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:28.132 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:28.132 fio-3.35 00:12:28.132 Starting 4 threads 00:12:29.505 00:12:29.505 job0: (groupid=0, jobs=1): err= 0: pid=3024168: Wed Nov 6 15:16:56 2024 00:12:29.505 read: IOPS=2568, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:12:29.505 slat (nsec): min=8397, max=31822, avg=9220.09, stdev=1137.47 00:12:29.505 clat (usec): min=95, max=248, avg=170.12, stdev=13.31 00:12:29.505 lat (usec): min=104, max=257, avg=179.34, stdev=13.33 00:12:29.505 clat percentiles (usec): 00:12:29.505 | 1.00th=[ 103], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 165], 00:12:29.505 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 169], 60.00th=[ 172], 00:12:29.505 | 70.00th=[ 174], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 188], 00:12:29.505 | 99.00th=[ 208], 99.50th=[ 215], 99.90th=[ 243], 99.95th=[ 245], 00:12:29.505 | 99.99th=[ 249] 00:12:29.505 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:12:29.505 slat (nsec): min=10181, max=51398, avg=11427.45, stdev=1579.99 00:12:29.505 clat (usec): min=85, max=236, avg=159.85, stdev=14.49 00:12:29.505 lat (usec): min=97, max=278, avg=171.28, stdev=14.65 00:12:29.505 clat percentiles (usec): 00:12:29.505 | 1.00th=[ 96], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:12:29.505 | 30.00th=[ 157], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:12:29.505 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 182], 00:12:29.505 | 99.00th=[ 204], 99.50th=[ 215], 99.90th=[ 233], 99.95th=[ 237], 00:12:29.505 | 99.99th=[ 237] 00:12:29.505 bw ( KiB/s): min=12288, max=12288, per=24.02%, avg=12288.00, stdev= 0.00, samples=1 00:12:29.505 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:29.505 lat (usec) : 100=1.13%, 250=98.87% 00:12:29.505 cpu : usr=3.70%, sys=5.80%, ctx=5643, majf=0, minf=1 00:12:29.505 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:29.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.505 issued rwts: total=2571,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:29.505 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:29.505 job1: (groupid=0, jobs=1): err= 0: pid=3024170: Wed Nov 6 15:16:56 2024 00:12:29.505 read: IOPS=2579, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1001msec) 00:12:29.505 slat (nsec): min=8481, max=64968, avg=9332.49, stdev=1751.11 00:12:29.505 clat (usec): min=82, max=604, avg=169.00, stdev=17.24 00:12:29.505 lat (usec): min=91, max=613, avg=178.33, stdev=17.13 00:12:29.505 clat percentiles (usec): 00:12:29.505 | 1.00th=[ 97], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 165], 00:12:29.505 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 169], 60.00th=[ 172], 00:12:29.505 | 70.00th=[ 174], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 186], 00:12:29.505 | 99.00th=[ 206], 99.50th=[ 215], 99.90th=[ 237], 99.95th=[ 239], 00:12:29.505 | 99.99th=[ 603] 00:12:29.505 write: IOPS=3068, BW=12.0MiB/s 
(12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:12:29.505 slat (nsec): min=10299, max=44483, avg=11499.26, stdev=1276.75 00:12:29.505 clat (usec): min=77, max=233, avg=159.94, stdev=12.45 00:12:29.505 lat (usec): min=87, max=271, avg=171.44, stdev=12.55 00:12:29.505 clat percentiles (usec): 00:12:29.505 | 1.00th=[ 111], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:12:29.505 | 30.00th=[ 157], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:12:29.505 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 172], 95.00th=[ 180], 00:12:29.505 | 99.00th=[ 204], 99.50th=[ 212], 99.90th=[ 229], 99.95th=[ 233], 00:12:29.505 | 99.99th=[ 233] 00:12:29.505 bw ( KiB/s): min=12288, max=12288, per=24.02%, avg=12288.00, stdev= 0.00, samples=1 00:12:29.505 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:29.505 lat (usec) : 100=0.78%, 250=99.20%, 750=0.02% 00:12:29.505 cpu : usr=2.60%, sys=7.00%, ctx=5655, majf=0, minf=1 00:12:29.505 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:29.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.505 issued rwts: total=2582,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:29.505 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:29.505 job2: (groupid=0, jobs=1): err= 0: pid=3024177: Wed Nov 6 15:16:56 2024 00:12:29.505 read: IOPS=2579, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1001msec) 00:12:29.506 slat (nsec): min=8545, max=32203, avg=9232.28, stdev=1190.77 00:12:29.506 clat (usec): min=90, max=265, avg=170.06, stdev=13.04 00:12:29.506 lat (usec): min=100, max=274, avg=179.29, stdev=13.11 00:12:29.506 clat percentiles (usec): 00:12:29.506 | 1.00th=[ 108], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:12:29.506 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 169], 60.00th=[ 172], 00:12:29.506 | 70.00th=[ 174], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 188], 00:12:29.506 | 99.00th=[ 206], 99.50th=[ 212], 99.90th=[ 229], 99.95th=[ 231], 00:12:29.506 | 99.99th=[ 265] 00:12:29.506 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:12:29.506 slat (nsec): min=10337, max=46092, avg=11401.38, stdev=1366.30 00:12:29.506 clat (usec): min=85, max=249, avg=159.26, stdev=14.07 00:12:29.506 lat (usec): min=96, max=260, avg=170.66, stdev=14.14 00:12:29.506 clat percentiles (usec): 00:12:29.506 | 1.00th=[ 99], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 155], 00:12:29.506 | 30.00th=[ 157], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:12:29.506 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 172], 95.00th=[ 180], 00:12:29.506 | 99.00th=[ 196], 99.50th=[ 204], 99.90th=[ 225], 99.95th=[ 235], 00:12:29.506 | 99.99th=[ 249] 00:12:29.506 bw ( KiB/s): min=12288, max=12288, per=24.02%, avg=12288.00, stdev= 0.00, samples=1 00:12:29.506 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:29.506 lat (usec) : 100=0.76%, 250=99.22%, 500=0.02% 00:12:29.506 cpu : usr=2.90%, sys=6.60%, ctx=5655, majf=0, minf=1 00:12:29.506 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:29.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.506 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.506 issued rwts: total=2582,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:29.506 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:29.506 job3: (groupid=0, jobs=1): err= 0: pid=3024178: Wed Nov 6 15:16:56 
2024 00:12:29.506 read: IOPS=3287, BW=12.8MiB/s (13.5MB/s)(12.9MiB/1001msec) 00:12:29.506 slat (nsec): min=8559, max=56569, avg=9289.21, stdev=1597.01 00:12:29.506 clat (usec): min=83, max=252, avg=136.89, stdev=36.45 00:12:29.506 lat (usec): min=92, max=262, avg=146.18, stdev=36.59 00:12:29.506 clat percentiles (usec): 00:12:29.506 | 1.00th=[ 88], 5.00th=[ 92], 10.00th=[ 94], 20.00th=[ 98], 00:12:29.506 | 30.00th=[ 101], 40.00th=[ 108], 50.00th=[ 159], 60.00th=[ 165], 00:12:29.506 | 70.00th=[ 169], 80.00th=[ 172], 90.00th=[ 176], 95.00th=[ 180], 00:12:29.506 | 99.00th=[ 198], 99.50th=[ 204], 99.90th=[ 241], 99.95th=[ 249], 00:12:29.506 | 99.99th=[ 253] 00:12:29.506 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:12:29.506 slat (nsec): min=10445, max=39790, avg=11666.21, stdev=1522.01 00:12:29.506 clat (usec): min=78, max=203, avg=128.33, stdev=34.27 00:12:29.506 lat (usec): min=90, max=214, avg=139.99, stdev=34.29 00:12:29.506 clat percentiles (usec): 00:12:29.506 | 1.00th=[ 84], 5.00th=[ 87], 10.00th=[ 89], 20.00th=[ 93], 00:12:29.506 | 30.00th=[ 96], 40.00th=[ 100], 50.00th=[ 137], 60.00th=[ 157], 00:12:29.506 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 167], 95.00th=[ 172], 00:12:29.506 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 200], 99.95th=[ 202], 00:12:29.506 | 99.99th=[ 204] 00:12:29.506 bw ( KiB/s): min=14672, max=14672, per=28.68%, avg=14672.00, stdev= 0.00, samples=1 00:12:29.506 iops : min= 3668, max= 3668, avg=3668.00, stdev= 0.00, samples=1 00:12:29.506 lat (usec) : 100=33.93%, 250=66.05%, 500=0.01% 00:12:29.506 cpu : usr=5.00%, sys=6.80%, ctx=6875, majf=0, minf=1 00:12:29.506 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:29.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.506 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.506 issued rwts: total=3291,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:29.506 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:29.506 00:12:29.506 Run status group 0 (all jobs): 00:12:29.506 READ: bw=43.0MiB/s (45.1MB/s), 10.0MiB/s-12.8MiB/s (10.5MB/s-13.5MB/s), io=43.1MiB (45.2MB), run=1001-1001msec 00:12:29.506 WRITE: bw=49.9MiB/s (52.4MB/s), 12.0MiB/s-14.0MiB/s (12.6MB/s-14.7MB/s), io=50.0MiB (52.4MB), run=1001-1001msec 00:12:29.506 00:12:29.506 Disk stats (read/write): 00:12:29.506 nvme0n1: ios=2294/2560, merge=0/0, ticks=381/394, in_queue=775, util=85.97% 00:12:29.506 nvme0n2: ios=2242/2560, merge=0/0, ticks=363/393, in_queue=756, util=86.48% 00:12:29.506 nvme0n3: ios=2254/2560, merge=0/0, ticks=363/398, in_queue=761, util=88.83% 00:12:29.506 nvme0n4: ios=2613/3072, merge=0/0, ticks=347/389, in_queue=736, util=89.67% 00:12:29.506 15:16:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:29.506 [global] 00:12:29.506 thread=1 00:12:29.506 invalidate=1 00:12:29.506 rw=write 00:12:29.506 time_based=1 00:12:29.506 runtime=1 00:12:29.506 ioengine=libaio 00:12:29.506 direct=1 00:12:29.506 bs=4096 00:12:29.506 iodepth=128 00:12:29.506 norandommap=0 00:12:29.506 numjobs=1 00:12:29.506 00:12:29.506 verify_dump=1 00:12:29.506 verify_backlog=512 00:12:29.506 verify_state_save=0 00:12:29.506 do_verify=1 00:12:29.506 verify=crc32c-intel 00:12:29.506 [job0] 00:12:29.506 filename=/dev/nvme0n1 00:12:29.506 [job1] 00:12:29.506 filename=/dev/nvme0n2 00:12:29.506 [job2] 00:12:29.506 
filename=/dev/nvme0n3 00:12:29.506 [job3] 00:12:29.506 filename=/dev/nvme0n4 00:12:29.506 Could not set queue depth (nvme0n1) 00:12:29.506 Could not set queue depth (nvme0n2) 00:12:29.506 Could not set queue depth (nvme0n3) 00:12:29.506 Could not set queue depth (nvme0n4) 00:12:29.764 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:29.764 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:29.764 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:29.764 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:29.764 fio-3.35 00:12:29.764 Starting 4 threads 00:12:31.136 00:12:31.136 job0: (groupid=0, jobs=1): err= 0: pid=3024474: Wed Nov 6 15:16:58 2024 00:12:31.136 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:12:31.136 slat (usec): min=2, max=6497, avg=83.44, stdev=393.21 00:12:31.136 clat (usec): min=4030, max=23183, avg=11099.18, stdev=4299.73 00:12:31.136 lat (usec): min=4034, max=23189, avg=11182.62, stdev=4321.66 00:12:31.136 clat percentiles (usec): 00:12:31.136 | 1.00th=[ 4686], 5.00th=[ 5211], 10.00th=[ 5735], 20.00th=[ 6587], 00:12:31.136 | 30.00th=[ 7373], 40.00th=[ 9372], 50.00th=[11076], 60.00th=[12518], 00:12:31.136 | 70.00th=[13829], 80.00th=[15008], 90.00th=[16712], 95.00th=[17957], 00:12:31.136 | 99.00th=[21365], 99.50th=[21627], 99.90th=[21890], 99.95th=[23200], 00:12:31.136 | 99.99th=[23200] 00:12:31.136 write: IOPS=5860, BW=22.9MiB/s (24.0MB/s)(22.9MiB/1002msec); 0 zone resets 00:12:31.136 slat (usec): min=2, max=7239, avg=85.25, stdev=404.15 00:12:31.136 clat (usec): min=1385, max=29700, avg=10981.28, stdev=5613.18 00:12:31.136 lat (usec): min=3234, max=29705, avg=11066.52, stdev=5646.37 00:12:31.136 clat percentiles (usec): 00:12:31.136 | 1.00th=[ 4817], 5.00th=[ 5538], 10.00th=[ 6063], 20.00th=[ 6521], 00:12:31.136 | 30.00th=[ 6980], 40.00th=[ 7308], 50.00th=[ 8586], 60.00th=[10290], 00:12:31.136 | 70.00th=[13042], 80.00th=[15664], 90.00th=[20317], 95.00th=[22414], 00:12:31.136 | 99.00th=[26346], 99.50th=[26608], 99.90th=[29754], 99.95th=[29754], 00:12:31.136 | 99.99th=[29754] 00:12:31.136 bw ( KiB/s): min=20480, max=25480, per=25.57%, avg=22980.00, stdev=3535.53, samples=2 00:12:31.136 iops : min= 5120, max= 6370, avg=5745.00, stdev=883.88, samples=2 00:12:31.136 lat (msec) : 2=0.01%, 4=0.07%, 10=51.01%, 20=42.19%, 50=6.72% 00:12:31.136 cpu : usr=3.80%, sys=5.79%, ctx=1154, majf=0, minf=1 00:12:31.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:12:31.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:31.136 issued rwts: total=5632,5872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:31.136 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:31.136 job1: (groupid=0, jobs=1): err= 0: pid=3024475: Wed Nov 6 15:16:58 2024 00:12:31.136 read: IOPS=5927, BW=23.2MiB/s (24.3MB/s)(23.2MiB/1001msec) 00:12:31.136 slat (nsec): min=1995, max=7429.8k, avg=82178.05, stdev=418738.49 00:12:31.136 clat (usec): min=408, max=24928, avg=10482.85, stdev=4679.97 00:12:31.136 lat (usec): min=1272, max=24941, avg=10565.03, stdev=4707.99 00:12:31.136 clat percentiles (usec): 00:12:31.136 | 1.00th=[ 3982], 5.00th=[ 5211], 10.00th=[ 5735], 20.00th=[ 6718], 00:12:31.136 | 30.00th=[ 7308], 
40.00th=[ 7832], 50.00th=[ 8717], 60.00th=[10290], 00:12:31.136 | 70.00th=[12387], 80.00th=[14615], 90.00th=[17695], 95.00th=[20055], 00:12:31.136 | 99.00th=[23200], 99.50th=[23987], 99.90th=[24773], 99.95th=[24773], 00:12:31.136 | 99.99th=[25035] 00:12:31.136 write: IOPS=6137, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1001msec); 0 zone resets 00:12:31.136 slat (usec): min=2, max=7200, avg=78.48, stdev=382.44 00:12:31.136 clat (usec): min=2905, max=25323, avg=10513.91, stdev=5176.74 00:12:31.136 lat (usec): min=2916, max=25334, avg=10592.39, stdev=5209.61 00:12:31.136 clat percentiles (usec): 00:12:31.136 | 1.00th=[ 4146], 5.00th=[ 4948], 10.00th=[ 5538], 20.00th=[ 6128], 00:12:31.136 | 30.00th=[ 6718], 40.00th=[ 7046], 50.00th=[ 7832], 60.00th=[10814], 00:12:31.136 | 70.00th=[13566], 80.00th=[15664], 90.00th=[19006], 95.00th=[20841], 00:12:31.136 | 99.00th=[22152], 99.50th=[22414], 99.90th=[22676], 99.95th=[22676], 00:12:31.136 | 99.99th=[25297] 00:12:31.136 bw ( KiB/s): min=24576, max=24576, per=27.35%, avg=24576.00, stdev= 0.00, samples=1 00:12:31.136 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:12:31.136 lat (usec) : 500=0.01% 00:12:31.136 lat (msec) : 2=0.01%, 4=0.96%, 10=57.24%, 20=35.02%, 50=6.76% 00:12:31.136 cpu : usr=3.70%, sys=6.30%, ctx=1116, majf=0, minf=1 00:12:31.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:12:31.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:31.136 issued rwts: total=5933,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:31.136 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:31.136 job2: (groupid=0, jobs=1): err= 0: pid=3024477: Wed Nov 6 15:16:58 2024 00:12:31.136 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:12:31.136 slat (usec): min=2, max=6674, avg=97.46, stdev=406.03 00:12:31.136 clat (usec): min=3988, max=25234, avg=12401.80, stdev=4553.18 00:12:31.136 lat (usec): min=3997, max=25243, avg=12499.25, stdev=4574.13 00:12:31.136 clat percentiles (usec): 00:12:31.136 | 1.00th=[ 5407], 5.00th=[ 6587], 10.00th=[ 7242], 20.00th=[ 7898], 00:12:31.136 | 30.00th=[ 8848], 40.00th=[10290], 50.00th=[11731], 60.00th=[13435], 00:12:31.136 | 70.00th=[14746], 80.00th=[16909], 90.00th=[19006], 95.00th=[20317], 00:12:31.136 | 99.00th=[22414], 99.50th=[23200], 99.90th=[25297], 99.95th=[25297], 00:12:31.136 | 99.99th=[25297] 00:12:31.136 write: IOPS=5616, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:12:31.136 slat (usec): min=2, max=5030, avg=84.20, stdev=339.22 00:12:31.136 clat (usec): min=477, max=24212, avg=11202.99, stdev=4664.28 00:12:31.136 lat (usec): min=3552, max=24223, avg=11287.19, stdev=4685.26 00:12:31.136 clat percentiles (usec): 00:12:31.136 | 1.00th=[ 5145], 5.00th=[ 5932], 10.00th=[ 6456], 20.00th=[ 6915], 00:12:31.136 | 30.00th=[ 7832], 40.00th=[ 8717], 50.00th=[ 9503], 60.00th=[11207], 00:12:31.136 | 70.00th=[13435], 80.00th=[16057], 90.00th=[18220], 95.00th=[19792], 00:12:31.136 | 99.00th=[23462], 99.50th=[23725], 99.90th=[24249], 99.95th=[24249], 00:12:31.136 | 99.99th=[24249] 00:12:31.136 bw ( KiB/s): min=20480, max=20480, per=22.79%, avg=20480.00, stdev= 0.00, samples=1 00:12:31.136 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:12:31.136 lat (usec) : 500=0.01% 00:12:31.136 lat (msec) : 4=0.18%, 10=46.59%, 20=48.05%, 50=5.18% 00:12:31.136 cpu : usr=3.60%, sys=5.69%, ctx=1123, majf=0, minf=1 00:12:31.136 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:12:31.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:31.136 issued rwts: total=5120,5628,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:31.136 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:31.136 job3: (groupid=0, jobs=1): err= 0: pid=3024478: Wed Nov 6 15:16:58 2024 00:12:31.136 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:12:31.136 slat (usec): min=2, max=6086, avg=105.18, stdev=480.47 00:12:31.136 clat (usec): min=4328, max=28453, avg=14293.90, stdev=4591.76 00:12:31.136 lat (usec): min=4348, max=29105, avg=14399.07, stdev=4616.93 00:12:31.136 clat percentiles (usec): 00:12:31.136 | 1.00th=[ 6063], 5.00th=[ 6915], 10.00th=[ 8291], 20.00th=[10290], 00:12:31.136 | 30.00th=[11994], 40.00th=[12911], 50.00th=[14222], 60.00th=[15139], 00:12:31.136 | 70.00th=[16188], 80.00th=[17957], 90.00th=[20579], 95.00th=[22676], 00:12:31.136 | 99.00th=[27132], 99.50th=[27395], 99.90th=[28443], 99.95th=[28443], 00:12:31.136 | 99.99th=[28443] 00:12:31.136 write: IOPS=4864, BW=19.0MiB/s (19.9MB/s)(19.0MiB/1001msec); 0 zone resets 00:12:31.136 slat (usec): min=2, max=4859, avg=100.54, stdev=420.33 00:12:31.136 clat (usec): min=437, max=22346, avg=12444.83, stdev=3829.14 00:12:31.136 lat (usec): min=1400, max=22780, avg=12545.37, stdev=3852.85 00:12:31.136 clat percentiles (usec): 00:12:31.136 | 1.00th=[ 3687], 5.00th=[ 6063], 10.00th=[ 7242], 20.00th=[ 9241], 00:12:31.136 | 30.00th=[10290], 40.00th=[11469], 50.00th=[12387], 60.00th=[13173], 00:12:31.136 | 70.00th=[14877], 80.00th=[15926], 90.00th=[17171], 95.00th=[18744], 00:12:31.136 | 99.00th=[21103], 99.50th=[21627], 99.90th=[22414], 99.95th=[22414], 00:12:31.136 | 99.99th=[22414] 00:12:31.136 bw ( KiB/s): min=20480, max=20480, per=22.79%, avg=20480.00, stdev= 0.00, samples=1 00:12:31.136 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:12:31.136 lat (usec) : 500=0.01% 00:12:31.136 lat (msec) : 2=0.15%, 4=0.36%, 10=22.69%, 20=69.88%, 50=6.91% 00:12:31.136 cpu : usr=2.30%, sys=6.50%, ctx=967, majf=0, minf=1 00:12:31.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:31.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:31.136 issued rwts: total=4608,4869,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:31.136 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:31.136 00:12:31.136 Run status group 0 (all jobs): 00:12:31.137 READ: bw=83.0MiB/s (87.0MB/s), 18.0MiB/s-23.2MiB/s (18.9MB/s-24.3MB/s), io=83.2MiB (87.2MB), run=1001-1002msec 00:12:31.137 WRITE: bw=87.8MiB/s (92.0MB/s), 19.0MiB/s-24.0MiB/s (19.9MB/s-25.1MB/s), io=87.9MiB (92.2MB), run=1001-1002msec 00:12:31.137 00:12:31.137 Disk stats (read/write): 00:12:31.137 nvme0n1: ios=5139/5120, merge=0/0, ticks=17362/16998, in_queue=34360, util=85.57% 00:12:31.137 nvme0n2: ios=4608/4808, merge=0/0, ticks=17038/16527, in_queue=33565, util=85.87% 00:12:31.137 nvme0n3: ios=4605/4608, merge=0/0, ticks=16921/15861, in_queue=32782, util=88.11% 00:12:31.137 nvme0n4: ios=3976/4096, merge=0/0, ticks=18946/17379, in_queue=36325, util=88.95% 00:12:31.137 15:16:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 
00:12:31.137 [global] 00:12:31.137 thread=1 00:12:31.137 invalidate=1 00:12:31.137 rw=randwrite 00:12:31.137 time_based=1 00:12:31.137 runtime=1 00:12:31.137 ioengine=libaio 00:12:31.137 direct=1 00:12:31.137 bs=4096 00:12:31.137 iodepth=128 00:12:31.137 norandommap=0 00:12:31.137 numjobs=1 00:12:31.137 00:12:31.137 verify_dump=1 00:12:31.137 verify_backlog=512 00:12:31.137 verify_state_save=0 00:12:31.137 do_verify=1 00:12:31.137 verify=crc32c-intel 00:12:31.137 [job0] 00:12:31.137 filename=/dev/nvme0n1 00:12:31.137 [job1] 00:12:31.137 filename=/dev/nvme0n2 00:12:31.137 [job2] 00:12:31.137 filename=/dev/nvme0n3 00:12:31.137 [job3] 00:12:31.137 filename=/dev/nvme0n4 00:12:31.137 Could not set queue depth (nvme0n1) 00:12:31.137 Could not set queue depth (nvme0n2) 00:12:31.137 Could not set queue depth (nvme0n3) 00:12:31.137 Could not set queue depth (nvme0n4) 00:12:31.393 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:31.394 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:31.394 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:31.394 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:31.394 fio-3.35 00:12:31.394 Starting 4 threads 00:12:32.764 00:12:32.764 job0: (groupid=0, jobs=1): err= 0: pid=3024778: Wed Nov 6 15:17:00 2024 00:12:32.764 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:12:32.764 slat (usec): min=2, max=4913, avg=98.63, stdev=463.53 00:12:32.764 clat (usec): min=3304, max=24391, avg=13180.73, stdev=4012.11 00:12:32.764 lat (usec): min=3308, max=24537, avg=13279.36, stdev=4023.38 00:12:32.764 clat percentiles (usec): 00:12:32.764 | 1.00th=[ 5276], 5.00th=[ 6718], 10.00th=[ 7832], 20.00th=[ 9372], 00:12:32.764 | 30.00th=[10814], 40.00th=[11731], 50.00th=[12911], 60.00th=[14484], 00:12:32.764 | 70.00th=[16057], 80.00th=[16909], 90.00th=[18220], 95.00th=[19530], 00:12:32.764 | 99.00th=[21365], 99.50th=[23200], 99.90th=[24249], 99.95th=[24249], 00:12:32.764 | 99.99th=[24511] 00:12:32.764 write: IOPS=5085, BW=19.9MiB/s (20.8MB/s)(19.9MiB/1002msec); 0 zone resets 00:12:32.764 slat (usec): min=2, max=5196, avg=102.02, stdev=491.41 00:12:32.764 clat (usec): min=489, max=22394, avg=12963.63, stdev=4248.08 00:12:32.764 lat (usec): min=1628, max=22403, avg=13065.65, stdev=4256.70 00:12:32.764 clat percentiles (usec): 00:12:32.764 | 1.00th=[ 3884], 5.00th=[ 5866], 10.00th=[ 6915], 20.00th=[ 8717], 00:12:32.764 | 30.00th=[10159], 40.00th=[11994], 50.00th=[13698], 60.00th=[14877], 00:12:32.764 | 70.00th=[15533], 80.00th=[16188], 90.00th=[18744], 95.00th=[19792], 00:12:32.764 | 99.00th=[21103], 99.50th=[22414], 99.90th=[22414], 99.95th=[22414], 00:12:32.764 | 99.99th=[22414] 00:12:32.764 bw ( KiB/s): min=19272, max=20480, per=21.27%, avg=19876.00, stdev=854.18, samples=2 00:12:32.764 iops : min= 4818, max= 5120, avg=4969.00, stdev=213.55, samples=2 00:12:32.764 lat (usec) : 500=0.01% 00:12:32.764 lat (msec) : 2=0.01%, 4=1.01%, 10=25.50%, 20=70.56%, 50=2.91% 00:12:32.764 cpu : usr=3.20%, sys=5.19%, ctx=850, majf=0, minf=2 00:12:32.764 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:32.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:32.764 issued rwts: 
total=4608,5096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:32.764 job1: (groupid=0, jobs=1): err= 0: pid=3024779: Wed Nov 6 15:17:00 2024 00:12:32.764 read: IOPS=6890, BW=26.9MiB/s (28.2MB/s)(27.0MiB/1003msec) 00:12:32.764 slat (usec): min=2, max=6913, avg=70.10, stdev=353.68 00:12:32.764 clat (usec): min=1917, max=26567, avg=8981.93, stdev=4444.50 00:12:32.764 lat (usec): min=2620, max=26578, avg=9052.03, stdev=4470.03 00:12:32.764 clat percentiles (usec): 00:12:32.764 | 1.00th=[ 4080], 5.00th=[ 4948], 10.00th=[ 5407], 20.00th=[ 6063], 00:12:32.765 | 30.00th=[ 6521], 40.00th=[ 7111], 50.00th=[ 7570], 60.00th=[ 7963], 00:12:32.765 | 70.00th=[ 8848], 80.00th=[10421], 90.00th=[17957], 95.00th=[19530], 00:12:32.765 | 99.00th=[23462], 99.50th=[25822], 99.90th=[26346], 99.95th=[26346], 00:12:32.765 | 99.99th=[26608] 00:12:32.765 write: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec); 0 zone resets 00:12:32.765 slat (usec): min=2, max=5242, avg=67.49, stdev=321.11 00:12:32.765 clat (usec): min=2431, max=23494, avg=9004.19, stdev=4460.88 00:12:32.765 lat (usec): min=2435, max=23504, avg=9071.69, stdev=4486.22 00:12:32.765 clat percentiles (usec): 00:12:32.765 | 1.00th=[ 3490], 5.00th=[ 4686], 10.00th=[ 5145], 20.00th=[ 5735], 00:12:32.765 | 30.00th=[ 6259], 40.00th=[ 6783], 50.00th=[ 7373], 60.00th=[ 8094], 00:12:32.765 | 70.00th=[ 9372], 80.00th=[11207], 90.00th=[18744], 95.00th=[19268], 00:12:32.765 | 99.00th=[19792], 99.50th=[20055], 99.90th=[20055], 99.95th=[20317], 00:12:32.765 | 99.99th=[23462] 00:12:32.765 bw ( KiB/s): min=22688, max=34656, per=30.68%, avg=28672.00, stdev=8462.65, samples=2 00:12:32.765 iops : min= 5672, max= 8664, avg=7168.00, stdev=2115.66, samples=2 00:12:32.765 lat (msec) : 2=0.01%, 4=1.51%, 10=74.41%, 20=22.06%, 50=2.02% 00:12:32.765 cpu : usr=4.49%, sys=5.69%, ctx=1078, majf=0, minf=1 00:12:32.765 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:32.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:32.765 issued rwts: total=6911,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.765 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:32.765 job2: (groupid=0, jobs=1): err= 0: pid=3024780: Wed Nov 6 15:17:00 2024 00:12:32.765 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:12:32.765 slat (usec): min=2, max=5092, avg=118.04, stdev=517.31 00:12:32.765 clat (usec): min=4778, max=25557, avg=15137.85, stdev=3226.43 00:12:32.765 lat (usec): min=4878, max=25561, avg=15255.90, stdev=3227.67 00:12:32.765 clat percentiles (usec): 00:12:32.765 | 1.00th=[ 6915], 5.00th=[ 9765], 10.00th=[11076], 20.00th=[12256], 00:12:32.765 | 30.00th=[13435], 40.00th=[14484], 50.00th=[15270], 60.00th=[15926], 00:12:32.765 | 70.00th=[16909], 80.00th=[17695], 90.00th=[19792], 95.00th=[20579], 00:12:32.765 | 99.00th=[21627], 99.50th=[21890], 99.90th=[25560], 99.95th=[25560], 00:12:32.765 | 99.99th=[25560] 00:12:32.765 write: IOPS=4335, BW=16.9MiB/s (17.8MB/s)(17.0MiB/1002msec); 0 zone resets 00:12:32.765 slat (usec): min=2, max=8696, avg=113.61, stdev=472.56 00:12:32.765 clat (usec): min=509, max=26766, avg=14893.89, stdev=4232.20 00:12:32.765 lat (usec): min=1441, max=26791, avg=15007.50, stdev=4243.42 00:12:32.765 clat percentiles (usec): 00:12:32.765 | 1.00th=[ 3425], 5.00th=[ 7701], 10.00th=[ 8586], 20.00th=[11076], 00:12:32.765 | 30.00th=[13698], 
40.00th=[14615], 50.00th=[15401], 60.00th=[16057], 00:12:32.765 | 70.00th=[16712], 80.00th=[18744], 90.00th=[19530], 95.00th=[20317], 00:12:32.765 | 99.00th=[25822], 99.50th=[26084], 99.90th=[26346], 99.95th=[26870], 00:12:32.765 | 99.99th=[26870] 00:12:32.765 bw ( KiB/s): min=16384, max=17352, per=18.05%, avg=16868.00, stdev=684.48, samples=2 00:12:32.765 iops : min= 4096, max= 4338, avg=4217.00, stdev=171.12, samples=2 00:12:32.765 lat (usec) : 750=0.01% 00:12:32.765 lat (msec) : 2=0.14%, 4=0.43%, 10=10.81%, 20=81.86%, 50=6.75% 00:12:32.765 cpu : usr=2.70%, sys=5.09%, ctx=906, majf=0, minf=1 00:12:32.765 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:32.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:32.765 issued rwts: total=4096,4344,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.765 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:32.765 job3: (groupid=0, jobs=1): err= 0: pid=3024781: Wed Nov 6 15:17:00 2024 00:12:32.765 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:12:32.765 slat (usec): min=2, max=5306, avg=75.36, stdev=357.22 00:12:32.765 clat (usec): min=3607, max=24683, avg=9806.96, stdev=3019.10 00:12:32.765 lat (usec): min=4150, max=24688, avg=9882.32, stdev=3040.57 00:12:32.765 clat percentiles (usec): 00:12:32.765 | 1.00th=[ 5342], 5.00th=[ 6783], 10.00th=[ 7308], 20.00th=[ 7701], 00:12:32.765 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9634], 00:12:32.765 | 70.00th=[10290], 80.00th=[11207], 90.00th=[13698], 95.00th=[16909], 00:12:32.765 | 99.00th=[19530], 99.50th=[22938], 99.90th=[24773], 99.95th=[24773], 00:12:32.765 | 99.99th=[24773] 00:12:32.765 write: IOPS=6811, BW=26.6MiB/s (27.9MB/s)(26.7MiB/1002msec); 0 zone resets 00:12:32.765 slat (usec): min=2, max=4588, avg=68.37, stdev=289.60 00:12:32.765 clat (usec): min=1905, max=22210, avg=9012.37, stdev=2902.56 00:12:32.765 lat (usec): min=1923, max=22214, avg=9080.75, stdev=2914.33 00:12:32.765 clat percentiles (usec): 00:12:32.765 | 1.00th=[ 4293], 5.00th=[ 5800], 10.00th=[ 6587], 20.00th=[ 7177], 00:12:32.765 | 30.00th=[ 7504], 40.00th=[ 7832], 50.00th=[ 8291], 60.00th=[ 8717], 00:12:32.765 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[12518], 95.00th=[14877], 00:12:32.765 | 99.00th=[20317], 99.50th=[20841], 99.90th=[21103], 99.95th=[22152], 00:12:32.765 | 99.99th=[22152] 00:12:32.765 bw ( KiB/s): min=24912, max=28672, per=28.67%, avg=26792.00, stdev=2658.72, samples=2 00:12:32.765 iops : min= 6228, max= 7168, avg=6698.00, stdev=664.68, samples=2 00:12:32.765 lat (msec) : 2=0.01%, 4=0.21%, 10=72.39%, 20=26.40%, 50=0.99% 00:12:32.765 cpu : usr=3.90%, sys=6.29%, ctx=1061, majf=0, minf=1 00:12:32.765 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:12:32.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:32.765 issued rwts: total=6656,6825,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.765 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:32.765 00:12:32.765 Run status group 0 (all jobs): 00:12:32.765 READ: bw=86.7MiB/s (90.9MB/s), 16.0MiB/s-26.9MiB/s (16.7MB/s-28.2MB/s), io=87.0MiB (91.2MB), run=1002-1003msec 00:12:32.765 WRITE: bw=91.3MiB/s (95.7MB/s), 16.9MiB/s-27.9MiB/s (17.8MB/s-29.3MB/s), io=91.5MiB (96.0MB), run=1002-1003msec 00:12:32.765 00:12:32.765 Disk stats 
(read/write): 00:12:32.765 nvme0n1: ios=3864/4096, merge=0/0, ticks=14497/15575, in_queue=30072, util=84.87% 00:12:32.765 nvme0n2: ios=5632/5821, merge=0/0, ticks=16040/15732, in_queue=31772, util=85.83% 00:12:32.765 nvme0n3: ios=3210/3584, merge=0/0, ticks=14433/14405, in_queue=28838, util=88.63% 00:12:32.765 nvme0n4: ios=5632/5855, merge=0/0, ticks=15992/14331, in_queue=30323, util=89.07% 00:12:32.765 15:17:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:32.765 15:17:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3024967 00:12:32.765 15:17:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:32.765 15:17:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:32.765 [global] 00:12:32.765 thread=1 00:12:32.765 invalidate=1 00:12:32.765 rw=read 00:12:32.765 time_based=1 00:12:32.765 runtime=10 00:12:32.765 ioengine=libaio 00:12:32.765 direct=1 00:12:32.765 bs=4096 00:12:32.765 iodepth=1 00:12:32.765 norandommap=1 00:12:32.765 numjobs=1 00:12:32.765 00:12:32.765 [job0] 00:12:32.765 filename=/dev/nvme0n1 00:12:32.765 [job1] 00:12:32.765 filename=/dev/nvme0n2 00:12:32.765 [job2] 00:12:32.765 filename=/dev/nvme0n3 00:12:32.765 [job3] 00:12:32.765 filename=/dev/nvme0n4 00:12:32.765 Could not set queue depth (nvme0n1) 00:12:32.765 Could not set queue depth (nvme0n2) 00:12:32.765 Could not set queue depth (nvme0n3) 00:12:32.765 Could not set queue depth (nvme0n4) 00:12:33.022 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:33.022 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:33.022 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:33.022 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:33.022 fio-3.35 00:12:33.022 Starting 4 threads 00:12:35.544 15:17:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:35.801 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=78438400, buflen=4096 00:12:35.801 fio: pid=3025083, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:35.801 15:17:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:36.059 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=77910016, buflen=4096 00:12:36.059 fio: pid=3025082, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:36.059 15:17:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:36.059 15:17:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:36.316 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=15138816, buflen=4096 00:12:36.316 fio: pid=3025080, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:36.316 15:17:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev 
in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:36.316 15:17:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:36.574 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=39649280, buflen=4096 00:12:36.574 fio: pid=3025081, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:36.574 00:12:36.574 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3025080: Wed Nov 6 15:17:04 2024 00:12:36.574 read: IOPS=6399, BW=25.0MiB/s (26.2MB/s)(78.4MiB/3138msec) 00:12:36.574 slat (usec): min=7, max=15796, avg=10.92, stdev=142.68 00:12:36.574 clat (usec): min=54, max=401, avg=143.64, stdev=33.46 00:12:36.574 lat (usec): min=68, max=15958, avg=154.56, stdev=146.64 00:12:36.574 clat percentiles (usec): 00:12:36.574 | 1.00th=[ 70], 5.00th=[ 85], 10.00th=[ 91], 20.00th=[ 126], 00:12:36.574 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 147], 00:12:36.574 | 70.00th=[ 155], 80.00th=[ 174], 90.00th=[ 186], 95.00th=[ 196], 00:12:36.574 | 99.00th=[ 227], 99.50th=[ 245], 99.90th=[ 285], 99.95th=[ 326], 00:12:36.574 | 99.99th=[ 363] 00:12:36.574 bw ( KiB/s): min=21408, max=29939, per=26.66%, avg=25416.50, stdev=2993.71, samples=6 00:12:36.574 iops : min= 5352, max= 7484, avg=6354.00, stdev=748.20, samples=6 00:12:36.574 lat (usec) : 100=14.37%, 250=85.22%, 500=0.40% 00:12:36.574 cpu : usr=2.01%, sys=7.24%, ctx=20085, majf=0, minf=1 00:12:36.574 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.574 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.574 issued rwts: total=20081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.574 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:36.574 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3025081: Wed Nov 6 15:17:04 2024 00:12:36.574 read: IOPS=7367, BW=28.8MiB/s (30.2MB/s)(102MiB/3538msec) 00:12:36.574 slat (usec): min=3, max=15809, avg=11.03, stdev=144.29 00:12:36.574 clat (usec): min=56, max=682, avg=122.98, stdev=45.40 00:12:36.574 lat (usec): min=64, max=15916, avg=134.01, stdev=151.08 00:12:36.574 clat percentiles (usec): 00:12:36.574 | 1.00th=[ 62], 5.00th=[ 65], 10.00th=[ 69], 20.00th=[ 82], 00:12:36.574 | 30.00th=[ 88], 40.00th=[ 95], 50.00th=[ 114], 60.00th=[ 143], 00:12:36.574 | 70.00th=[ 151], 80.00th=[ 167], 90.00th=[ 188], 95.00th=[ 196], 00:12:36.574 | 99.00th=[ 219], 99.50th=[ 243], 99.90th=[ 281], 99.95th=[ 330], 00:12:36.574 | 99.99th=[ 420] 00:12:36.574 bw ( KiB/s): min=21512, max=37170, per=28.02%, avg=26712.33, stdev=5807.47, samples=6 00:12:36.574 iops : min= 5378, max= 9292, avg=6678.00, stdev=1451.69, samples=6 00:12:36.574 lat (usec) : 100=45.89%, 250=53.74%, 500=0.36%, 750=0.01% 00:12:36.574 cpu : usr=2.71%, sys=7.92%, ctx=26073, majf=0, minf=2 00:12:36.574 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.574 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.574 issued rwts: total=26065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.574 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:36.574 job2: (groupid=0, jobs=1): err=95 
(file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3025082: Wed Nov 6 15:17:04 2024 00:12:36.574 read: IOPS=6527, BW=25.5MiB/s (26.7MB/s)(74.3MiB/2914msec) 00:12:36.574 slat (usec): min=8, max=7774, avg= 9.96, stdev=79.56 00:12:36.574 clat (usec): min=69, max=349, avg=141.43, stdev=31.61 00:12:36.574 lat (usec): min=78, max=7981, avg=151.38, stdev=86.11 00:12:36.574 clat percentiles (usec): 00:12:36.574 | 1.00th=[ 88], 5.00th=[ 92], 10.00th=[ 95], 20.00th=[ 104], 00:12:36.574 | 30.00th=[ 135], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 149], 00:12:36.575 | 70.00th=[ 153], 80.00th=[ 161], 90.00th=[ 186], 95.00th=[ 194], 00:12:36.575 | 99.00th=[ 221], 99.50th=[ 239], 99.90th=[ 258], 99.95th=[ 260], 00:12:36.575 | 99.99th=[ 310] 00:12:36.575 bw ( KiB/s): min=21376, max=34432, per=27.62%, avg=26326.40, stdev=4864.55, samples=5 00:12:36.575 iops : min= 5344, max= 8608, avg=6581.60, stdev=1216.14, samples=5 00:12:36.575 lat (usec) : 100=16.96%, 250=82.79%, 500=0.24% 00:12:36.575 cpu : usr=2.64%, sys=6.83%, ctx=19024, majf=0, minf=2 00:12:36.575 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.575 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.575 issued rwts: total=19022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.575 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:36.575 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3025083: Wed Nov 6 15:17:04 2024 00:12:36.575 read: IOPS=7090, BW=27.7MiB/s (29.0MB/s)(74.8MiB/2701msec) 00:12:36.575 slat (nsec): min=8465, max=38762, avg=9248.04, stdev=1052.95 00:12:36.575 clat (usec): min=82, max=390, avg=129.92, stdev=29.55 00:12:36.575 lat (usec): min=91, max=399, avg=139.17, stdev=29.64 00:12:36.575 clat percentiles (usec): 00:12:36.575 | 1.00th=[ 90], 5.00th=[ 93], 10.00th=[ 95], 20.00th=[ 99], 00:12:36.575 | 30.00th=[ 103], 40.00th=[ 129], 50.00th=[ 137], 60.00th=[ 141], 00:12:36.575 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 169], 95.00th=[ 180], 00:12:36.575 | 99.00th=[ 210], 99.50th=[ 233], 99.90th=[ 318], 99.95th=[ 355], 00:12:36.575 | 99.99th=[ 379] 00:12:36.575 bw ( KiB/s): min=22688, max=34000, per=29.32%, avg=27945.60, stdev=4112.07, samples=5 00:12:36.575 iops : min= 5672, max= 8500, avg=6986.40, stdev=1028.02, samples=5 00:12:36.575 lat (usec) : 100=24.52%, 250=75.14%, 500=0.33% 00:12:36.575 cpu : usr=2.56%, sys=7.85%, ctx=19151, majf=0, minf=2 00:12:36.575 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.575 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.575 issued rwts: total=19151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.575 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:36.575 00:12:36.575 Run status group 0 (all jobs): 00:12:36.575 READ: bw=93.1MiB/s (97.6MB/s), 25.0MiB/s-28.8MiB/s (26.2MB/s-30.2MB/s), io=329MiB (345MB), run=2701-3538msec 00:12:36.575 00:12:36.575 Disk stats (read/write): 00:12:36.575 nvme0n1: ios=19795/0, merge=0/0, ticks=2763/0, in_queue=2763, util=94.58% 00:12:36.575 nvme0n2: ios=23858/0, merge=0/0, ticks=2920/0, in_queue=2920, util=94.94% 00:12:36.575 nvme0n3: ios=18643/0, merge=0/0, ticks=2536/0, in_queue=2536, util=96.04% 00:12:36.575 nvme0n4: ios=18340/0, merge=0/0, ticks=2300/0, in_queue=2300, 
util=96.48% 00:12:36.832 15:17:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:36.832 15:17:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:37.089 15:17:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:37.089 15:17:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:37.653 15:17:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:37.653 15:17:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:37.910 15:17:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:37.910 15:17:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:38.474 15:17:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:38.474 15:17:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:38.731 15:17:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:38.731 15:17:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3024967 00:12:38.731 15:17:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:38.731 15:17:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:39.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.663 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:39.663 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:12:39.663 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:39.663 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.663 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.663 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:39.663 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:12:39.663 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:39.663 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:39.663 nvmf hotplug test: fio failed as expected 00:12:39.663 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:39.922 rmmod nvme_rdma 00:12:39.922 rmmod nvme_fabrics 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3022556 ']' 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3022556 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 3022556 ']' 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 3022556 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3022556 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3022556' 00:12:39.922 killing process with pid 3022556 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 3022556 00:12:39.922 15:17:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 3022556 00:12:41.821 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:41.821 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ rdma 
== \t\c\p ]] 00:12:41.821 00:12:41.821 real 0m30.613s 00:12:41.821 user 1m50.168s 00:12:41.821 sys 0m10.660s 00:12:41.821 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:41.821 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.821 ************************************ 00:12:41.821 END TEST nvmf_fio_target 00:12:41.821 ************************************ 00:12:41.821 15:17:09 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:12:41.821 15:17:09 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:41.821 15:17:09 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:41.821 15:17:09 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:41.821 ************************************ 00:12:41.821 START TEST nvmf_bdevio 00:12:41.821 ************************************ 00:12:41.821 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:12:41.821 * Looking for test storage... 00:12:41.821 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:41.821 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:41.821 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:12:41.821 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:42.081 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:42.081 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:42.081 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:42.081 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:42.081 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:42.081 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:42.081 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:42.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.082 --rc genhtml_branch_coverage=1 00:12:42.082 --rc genhtml_function_coverage=1 00:12:42.082 --rc genhtml_legend=1 00:12:42.082 --rc geninfo_all_blocks=1 00:12:42.082 --rc geninfo_unexecuted_blocks=1 00:12:42.082 00:12:42.082 ' 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:42.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.082 --rc genhtml_branch_coverage=1 00:12:42.082 --rc genhtml_function_coverage=1 00:12:42.082 --rc genhtml_legend=1 00:12:42.082 --rc geninfo_all_blocks=1 00:12:42.082 --rc geninfo_unexecuted_blocks=1 00:12:42.082 00:12:42.082 ' 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:42.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.082 --rc genhtml_branch_coverage=1 00:12:42.082 --rc genhtml_function_coverage=1 00:12:42.082 --rc genhtml_legend=1 00:12:42.082 --rc geninfo_all_blocks=1 00:12:42.082 --rc geninfo_unexecuted_blocks=1 00:12:42.082 00:12:42.082 ' 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:42.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.082 --rc genhtml_branch_coverage=1 00:12:42.082 --rc genhtml_function_coverage=1 00:12:42.082 --rc genhtml_legend=1 00:12:42.082 --rc geninfo_all_blocks=1 00:12:42.082 --rc geninfo_unexecuted_blocks=1 00:12:42.082 00:12:42.082 ' 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:42.082 15:17:09 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:42.082 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:42.082 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:42.083 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:12:42.083 15:17:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:48.653 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:48.653 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:48.653 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:48.653 15:17:16 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:48.654 Found net devices under 0000:18:00.0: mlx_0_0 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:48.654 Found net devices under 0000:18:00.1: mlx_0_1 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # rdma_device_init 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:48.654 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 
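The get_ip_address trace above resolves each RDMA interface's IPv4 address with an ip/awk/cut pipeline; a minimal standalone sketch of the same helper (interface name passed as the first argument) is:

    # Minimal standalone version of the get_ip_address helper traced above: print the
    # first IPv4 address configured on the given netdev (field 4 of `ip -o -4` is addr/prefix).
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0    # -> 192.168.100.8 on the node traced above
    get_ip_address mlx_0_1    # -> 192.168.100.9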
00:12:48.914 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:48.914 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:48.915 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:12:48.915 altname enp24s0f0np0 00:12:48.915 altname ens785f0np0 00:12:48.915 inet 192.168.100.8/24 scope global mlx_0_0 00:12:48.915 valid_lft forever preferred_lft forever 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:48.915 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:48.915 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:12:48.915 altname enp24s0f1np1 00:12:48.915 altname ens785f1np1 00:12:48.915 inet 192.168.100.9/24 scope global mlx_0_1 00:12:48.915 valid_lft forever preferred_lft forever 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 
00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:48.915 192.168.100.9' 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:48.915 192.168.100.9' 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # head -n 1 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:48.915 192.168.100.9' 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # tail -n +2 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # head -n 1 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma 
== rdma ']' 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3029288 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3029288 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 3029288 ']' 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:48.915 15:17:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:49.175 [2024-11-06 15:17:16.636178] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:12:49.175 [2024-11-06 15:17:16.636288] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.175 [2024-11-06 15:17:16.788652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:49.434 [2024-11-06 15:17:16.898283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.434 [2024-11-06 15:17:16.898338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:49.434 [2024-11-06 15:17:16.898351] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.434 [2024-11-06 15:17:16.898364] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.434 [2024-11-06 15:17:16.898374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
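The nvmfappstart trace above starts nvmf_tgt with core mask 0x78 and then waits for it to come up; a rough shell equivalent of that launch-and-wait step, assuming the SPDK repository root as working directory and the default /var/tmp/spdk.sock RPC socket, is:

    # Rough equivalent of the nvmfappstart -m 0x78 step traced above (paths/socket assumed).
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!

    # Poll the RPC socket until the target answers, bailing out if it died early.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"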
00:12:49.434 [2024-11-06 15:17:16.900630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:49.434 [2024-11-06 15:17:16.900712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:49.434 [2024-11-06 15:17:16.900773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.434 [2024-11-06 15:17:16.900799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:50.002 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:50.002 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:12:50.002 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:50.002 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:50.002 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:50.002 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.002 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:50.002 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.002 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:50.002 [2024-11-06 15:17:17.521344] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f53b27bd940) succeed. 00:12:50.002 [2024-11-06 15:17:17.531750] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f53b2779940) succeed. 
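The rpc_cmd entries that follow create the RDMA transport, a 64 MiB Malloc0 bdev, and subsystem nqn.2016-06.io.spdk:cnode1 listening on 192.168.100.8:4420; issued directly with rpc.py (repository-relative path and default socket are assumptions), the same sequence looks like:

    # Same sequence as the rpc_cmd calls below, issued through rpc.py instead of the test helper.
    rpc() { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

    rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB bdev, 512-byte blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420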
00:12:50.261 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.261 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:50.261 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.261 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:50.261 Malloc0 00:12:50.261 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.261 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:50.261 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.261 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:50.521 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.521 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:50.521 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.521 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:50.521 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.521 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:50.521 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.521 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:50.521 [2024-11-06 15:17:17.922181] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:50.521 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.521 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:50.521 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:50.521 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:50.521 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:50.521 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:50.521 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:50.521 { 00:12:50.521 "params": { 00:12:50.521 "name": "Nvme$subsystem", 00:12:50.521 "trtype": "$TEST_TRANSPORT", 00:12:50.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:50.521 "adrfam": "ipv4", 00:12:50.521 "trsvcid": "$NVMF_PORT", 00:12:50.521 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:50.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:50.521 "hdgst": ${hdgst:-false}, 00:12:50.521 "ddgst": ${ddgst:-false} 00:12:50.521 }, 00:12:50.521 "method": "bdev_nvme_attach_controller" 00:12:50.521 } 00:12:50.521 EOF 00:12:50.521 )") 00:12:50.521 15:17:17 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:50.521 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:12:50.521 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:50.521 15:17:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:50.521 "params": { 00:12:50.521 "name": "Nvme1", 00:12:50.521 "trtype": "rdma", 00:12:50.521 "traddr": "192.168.100.8", 00:12:50.521 "adrfam": "ipv4", 00:12:50.521 "trsvcid": "4420", 00:12:50.521 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:50.521 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:50.521 "hdgst": false, 00:12:50.521 "ddgst": false 00:12:50.521 }, 00:12:50.521 "method": "bdev_nvme_attach_controller" 00:12:50.521 }' 00:12:50.521 [2024-11-06 15:17:18.011663] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:12:50.521 [2024-11-06 15:17:18.011766] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3029492 ] 00:12:50.781 [2024-11-06 15:17:18.158889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:50.781 [2024-11-06 15:17:18.275226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.781 [2024-11-06 15:17:18.275239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.781 [2024-11-06 15:17:18.275272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.040 I/O targets: 00:12:51.040 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:51.040 00:12:51.040 00:12:51.040 CUnit - A unit testing framework for C - Version 2.1-3 00:12:51.040 http://cunit.sourceforge.net/ 00:12:51.040 00:12:51.040 00:12:51.040 Suite: bdevio tests on: Nvme1n1 00:12:51.299 Test: blockdev write read block ...passed 00:12:51.299 Test: blockdev write zeroes read block ...passed 00:12:51.299 Test: blockdev write zeroes read no split ...passed 00:12:51.299 Test: blockdev write zeroes read split ...passed 00:12:51.299 Test: blockdev write zeroes read split partial ...passed 00:12:51.299 Test: blockdev reset ...[2024-11-06 15:17:18.758805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:51.299 [2024-11-06 15:17:18.796995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:12:51.299 [2024-11-06 15:17:18.828476] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:12:51.299 passed 00:12:51.299 Test: blockdev write read 8 blocks ...passed 00:12:51.299 Test: blockdev write read size > 128k ...passed 00:12:51.299 Test: blockdev write read invalid size ...passed 00:12:51.299 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:51.299 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:51.299 Test: blockdev write read max offset ...passed 00:12:51.299 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:51.299 Test: blockdev writev readv 8 blocks ...passed 00:12:51.299 Test: blockdev writev readv 30 x 1block ...passed 00:12:51.299 Test: blockdev writev readv block ...passed 00:12:51.299 Test: blockdev writev readv size > 128k ...passed 00:12:51.299 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:51.299 Test: blockdev comparev and writev ...[2024-11-06 15:17:18.837117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:51.299 [2024-11-06 15:17:18.837174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:51.299 [2024-11-06 15:17:18.837192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:51.299 [2024-11-06 15:17:18.837208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:51.299 [2024-11-06 15:17:18.837426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:51.299 [2024-11-06 15:17:18.837445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:51.299 [2024-11-06 15:17:18.837459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:51.299 [2024-11-06 15:17:18.837474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:51.299 [2024-11-06 15:17:18.837654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:51.299 [2024-11-06 15:17:18.837676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:51.299 [2024-11-06 15:17:18.837690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:51.299 [2024-11-06 15:17:18.837705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:51.299 [2024-11-06 15:17:18.837891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:51.299 [2024-11-06 15:17:18.837912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:51.299 [2024-11-06 15:17:18.837926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:51.299 [2024-11-06 15:17:18.837941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:51.299 passed 00:12:51.299 Test: blockdev nvme passthru rw ...passed 00:12:51.299 Test: blockdev nvme passthru vendor specific ...[2024-11-06 15:17:18.838343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:51.299 [2024-11-06 15:17:18.838368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:51.299 [2024-11-06 15:17:18.838423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:51.299 [2024-11-06 15:17:18.838440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:51.299 [2024-11-06 15:17:18.838506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:51.299 [2024-11-06 15:17:18.838526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:51.299 [2024-11-06 15:17:18.838585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:51.299 [2024-11-06 15:17:18.838603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:51.299 passed 00:12:51.299 Test: blockdev nvme admin passthru ...passed 00:12:51.299 Test: blockdev copy ...passed 00:12:51.299 00:12:51.300 Run Summary: Type Total Ran Passed Failed Inactive 00:12:51.300 suites 1 1 n/a 0 0 00:12:51.300 tests 23 23 23 0 0 00:12:51.300 asserts 152 152 152 0 n/a 00:12:51.300 00:12:51.300 Elapsed time = 0.405 seconds 00:12:52.235 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.235 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.235 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:52.235 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.235 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:52.235 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:52.235 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:52.235 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:52.235 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:52.235 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:52.235 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:52.235 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:52.235 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:52.235 rmmod nvme_rdma 00:12:52.235 rmmod nvme_fabrics 00:12:52.235 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:52.235 15:17:19 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:52.235 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:12:52.235 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3029288 ']' 00:12:52.235 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3029288 00:12:52.235 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 3029288 ']' 00:12:52.235 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 3029288 00:12:52.235 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:12:52.235 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:52.235 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3029288 00:12:52.494 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:12:52.494 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:12:52.494 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3029288' 00:12:52.494 killing process with pid 3029288 00:12:52.494 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 3029288 00:12:52.494 15:17:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 3029288 00:12:54.401 15:17:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:54.401 15:17:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:54.401 00:12:54.401 real 0m12.510s 00:12:54.401 user 0m23.590s 00:12:54.401 sys 0m6.343s 00:12:54.401 15:17:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:54.401 15:17:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:54.401 ************************************ 00:12:54.401 END TEST nvmf_bdevio 00:12:54.401 ************************************ 00:12:54.401 15:17:21 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:54.401 00:12:54.401 real 4m46.116s 00:12:54.401 user 12m0.597s 00:12:54.401 sys 1m44.639s 00:12:54.401 15:17:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:54.401 15:17:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:54.401 ************************************ 00:12:54.401 END TEST nvmf_target_core 00:12:54.401 ************************************ 00:12:54.401 15:17:21 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:12:54.401 15:17:21 nvmf_rdma -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:54.401 15:17:21 nvmf_rdma -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:54.401 15:17:21 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:54.401 ************************************ 00:12:54.401 START TEST nvmf_target_extra 00:12:54.401 ************************************ 00:12:54.401 15:17:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:12:54.660 * Looking for test storage... 00:12:54.660 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:54.660 15:17:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:54.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.661 --rc genhtml_branch_coverage=1 00:12:54.661 --rc genhtml_function_coverage=1 00:12:54.661 --rc genhtml_legend=1 00:12:54.661 --rc geninfo_all_blocks=1 00:12:54.661 --rc geninfo_unexecuted_blocks=1 00:12:54.661 00:12:54.661 ' 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:54.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.661 --rc genhtml_branch_coverage=1 00:12:54.661 --rc genhtml_function_coverage=1 00:12:54.661 --rc genhtml_legend=1 00:12:54.661 --rc geninfo_all_blocks=1 00:12:54.661 --rc geninfo_unexecuted_blocks=1 00:12:54.661 00:12:54.661 ' 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:54.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.661 --rc genhtml_branch_coverage=1 00:12:54.661 --rc genhtml_function_coverage=1 00:12:54.661 --rc genhtml_legend=1 00:12:54.661 --rc geninfo_all_blocks=1 00:12:54.661 --rc geninfo_unexecuted_blocks=1 00:12:54.661 00:12:54.661 ' 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:54.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.661 --rc genhtml_branch_coverage=1 00:12:54.661 --rc genhtml_function_coverage=1 00:12:54.661 --rc genhtml_legend=1 00:12:54.661 --rc geninfo_all_blocks=1 00:12:54.661 --rc geninfo_unexecuted_blocks=1 00:12:54.661 00:12:54.661 ' 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:54.661 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:54.661 ************************************ 00:12:54.661 START TEST nvmf_example 00:12:54.661 ************************************ 00:12:54.661 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:12:54.920 * Looking for test storage... 
00:12:54.920 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:54.920 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:54.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.920 --rc genhtml_branch_coverage=1 00:12:54.920 --rc genhtml_function_coverage=1 00:12:54.920 --rc genhtml_legend=1 00:12:54.921 --rc geninfo_all_blocks=1 00:12:54.921 --rc geninfo_unexecuted_blocks=1 00:12:54.921 00:12:54.921 ' 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:54.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.921 --rc genhtml_branch_coverage=1 00:12:54.921 --rc genhtml_function_coverage=1 00:12:54.921 --rc genhtml_legend=1 00:12:54.921 --rc geninfo_all_blocks=1 00:12:54.921 --rc geninfo_unexecuted_blocks=1 00:12:54.921 00:12:54.921 ' 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:54.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.921 --rc genhtml_branch_coverage=1 00:12:54.921 --rc genhtml_function_coverage=1 00:12:54.921 --rc genhtml_legend=1 00:12:54.921 --rc geninfo_all_blocks=1 00:12:54.921 --rc geninfo_unexecuted_blocks=1 00:12:54.921 00:12:54.921 ' 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:54.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.921 --rc genhtml_branch_coverage=1 00:12:54.921 --rc genhtml_function_coverage=1 00:12:54.921 --rc genhtml_legend=1 00:12:54.921 --rc geninfo_all_blocks=1 00:12:54.921 --rc geninfo_unexecuted_blocks=1 00:12:54.921 00:12:54.921 ' 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 
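The trace above steps through the lt/cmp_versions helpers from scripts/common.sh, which decide whether the installed lcov (1.15 here) predates 2.x and therefore needs the older --rc lcov_branch_coverage=1 option spelling. A minimal standalone sketch of that component-wise comparison follows; it paraphrases what the trace shows rather than reproducing the exact common.sh source.

```bash
#!/usr/bin/env bash
# Sketch of the comparison seen in the trace: cmp_versions "1.15" "<" "2".
cmp_versions() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"   # split on '.', '-' and ':' as in the trace
    IFS=.-: read -ra ver2 <<< "$3"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a > b )); then [[ $2 == ">" ]]; return; fi
        if (( a < b )); then [[ $2 == "<" ]]; return; fi
    done
    return 1   # all components equal: a strict < or > comparison is false
}

lt() { cmp_versions "$1" "<" "$2"; }

# lcov 1.15 is older than 2.x, so the pre-2.x option spelling is chosen.
if lt 1.15 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi
echo "${LCOV_OPTS:-}"
```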
00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:54.921 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 
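In the trace above, nvmf/common.sh's build_nvmf_app_args and target/nvmf_example.sh assemble the target invocation as bash arrays, appending the shared-memory id and example-specific flags before the binary is launched later in the run. A rough sketch of that pattern is below; the argument values are taken from the trace, but the helper body is a simplified assumption, not the script's full implementation.

```bash
#!/usr/bin/env bash
# Sketch of the array-building pattern from the trace (helper body simplified).
SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples
NVMF_APP_SHM_ID=0
export NVMF_APP_SHM_ID
NO_HUGE=()   # stays empty in this run

NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf")

build_nvmf_example_args() {
    # values taken verbatim from the trace; -i is the shared-memory id
    NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000)
    NVMF_EXAMPLE+=("${NO_HUGE[@]}")
}

build_nvmf_example_args

# The later trace runs exactly this expansion (plus a core mask):
#   .../spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
echo "${NVMF_EXAMPLE[@]}" -m 0xF
```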
00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:12:54.921 15:17:22 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:01.521 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:01.521 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:13:01.521 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:01.521 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:01.521 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:01.521 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:01.521 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:01.521 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:13:01.521 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:01.521 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 
00:13:01.521 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:13:01.521 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:13:01.521 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:13:01.521 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:13:01.521 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:01.522 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:01.522 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:01.522 Found net devices under 0000:18:00.0: mlx_0_0 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:01.522 Found net devices under 0000:18:00.1: mlx_0_1 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:01.522 15:17:29 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # rdma_device_init 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:01.522 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:01.782 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:01.782 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:13:01.782 altname enp24s0f0np0 00:13:01.782 altname ens785f0np0 00:13:01.782 inet 192.168.100.8/24 scope global mlx_0_0 00:13:01.782 valid_lft forever preferred_lft forever 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:01.782 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:01.782 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:13:01.782 altname enp24s0f1np1 00:13:01.782 altname ens785f1np1 00:13:01.782 inet 192.168.100.9/24 scope global mlx_0_1 00:13:01.782 valid_lft forever preferred_lft forever 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # 
get_available_rdma_ips 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:01.782 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:01.783 15:17:29 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:01.783 192.168.100.9' 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:01.783 192.168.100.9' 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # head -n 1 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:01.783 192.168.100.9' 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # tail -n +2 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # head -n 1 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3033012 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3033012 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 3033012 ']' 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
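Immediately above, nvmftestinit finishes: the two mlx_0_* interfaces report 192.168.100.8 and 192.168.100.9, the first and second become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP, transport options are set to '-t rdma --num-shared-buffers 1024', nvme-rdma is loaded, and the example target is started with -m 0xF and awaited on /var/tmp/spdk.sock. A condensed sketch of that sequence follows; the polling loop is a simplified stand-in for autotest_common.sh's waitforlisten, not its actual implementation.

```bash
#!/usr/bin/env bash
# Sketch of the tail end of nvmftestinit as seen in the trace: pick target IPs
# from the RDMA interfaces, set transport options, and start the example target.
get_ip_address() {                      # first IPv4 address on an interface
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip_address "$dev"; done)

NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9

NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
modprobe nvme-rdma

# Launch the example target on cores 0-3 and wait for its RPC socket.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF &
nvmfpid=$!
# waitforlisten in the real scripts polls the RPC endpoint; a minimal substitute:
until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done
```

Once the socket is up, the trace issues the rpc_cmd calls seen below (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) before driving I/O with spdk_nvme_perf.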
00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:01.783 15:17:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:02.718 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:02.718 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:13:02.718 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:02.718 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:02.718 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:02.718 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:02.718 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.718 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:02.976 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.976 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:02.976 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.976 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:03.235 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.235 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:03.235 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:03.235 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.235 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:03.235 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.235 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:03.235 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:03.235 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.235 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:03.235 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.235 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:03.235 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.235 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:03.235 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:03.235 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:13:03.235 15:17:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:15.579 Initializing NVMe Controllers 00:13:15.579 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:13:15.579 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:15.579 Initialization complete. Launching workers. 00:13:15.579 ======================================================== 00:13:15.579 Latency(us) 00:13:15.579 Device Information : IOPS MiB/s Average min max 00:13:15.579 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 22285.80 87.05 2871.50 762.47 20048.07 00:13:15.579 ======================================================== 00:13:15.579 Total : 22285.80 87.05 2871.50 762.47 20048.07 00:13:15.579 00:13:15.579 15:17:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:13:15.579 15:17:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:13:15.579 15:17:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:15.579 15:17:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:13:15.579 15:17:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:15.579 15:17:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:15.579 15:17:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:13:15.579 15:17:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:15.579 15:17:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:15.579 rmmod nvme_rdma 00:13:15.579 rmmod nvme_fabrics 00:13:15.579 15:17:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:15.579 15:17:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:13:15.579 15:17:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:13:15.579 15:17:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3033012 ']' 00:13:15.579 15:17:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3033012 00:13:15.579 15:17:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 3033012 ']' 00:13:15.579 15:17:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 3033012 00:13:15.579 15:17:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:13:15.579 15:17:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:15.579 15:17:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3033012 00:13:15.579 15:17:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:13:15.580 15:17:42 
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:13:15.580 15:17:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3033012' 00:13:15.580 killing process with pid 3033012 00:13:15.580 15:17:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 3033012 00:13:15.580 15:17:42 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 3033012 00:13:16.516 nvmf threads initialize successfully 00:13:16.516 bdev subsystem init successfully 00:13:16.516 created a nvmf target service 00:13:16.516 create targets's poll groups done 00:13:16.516 all subsystems of target started 00:13:16.516 nvmf target is running 00:13:16.516 all subsystems of target stopped 00:13:16.516 destroy targets's poll groups done 00:13:16.516 destroyed the nvmf target service 00:13:16.516 bdev subsystem finish successfully 00:13:16.516 nvmf threads destroy successfully 00:13:16.516 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:16.516 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:16.516 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:13:16.516 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:16.516 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:16.516 00:13:16.516 real 0m21.905s 00:13:16.516 user 0m58.249s 00:13:16.516 sys 0m6.017s 00:13:16.516 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:16.516 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:16.516 ************************************ 00:13:16.516 END TEST nvmf_example 00:13:16.516 ************************************ 00:13:16.775 15:17:44 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:13:16.775 15:17:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:16.775 15:17:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:16.775 15:17:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:16.775 ************************************ 00:13:16.775 START TEST nvmf_filesystem 00:13:16.775 ************************************ 00:13:16.775 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:13:16.775 * Looking for test storage... 
00:13:16.775 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:16.775 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:16.775 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:13:16.775 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:16.775 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:16.775 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:16.775 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:16.775 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:16.775 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:16.775 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:16.775 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:16.775 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:16.775 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:16.775 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:16.776 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:16.776 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:16.776 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:16.776 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:16.776 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:16.776 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:16.776 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:16.776 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:16.776 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:16.776 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:17.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.038 --rc genhtml_branch_coverage=1 00:13:17.038 --rc genhtml_function_coverage=1 00:13:17.038 --rc genhtml_legend=1 00:13:17.038 --rc geninfo_all_blocks=1 00:13:17.038 --rc geninfo_unexecuted_blocks=1 00:13:17.038 00:13:17.038 ' 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:17.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.038 --rc genhtml_branch_coverage=1 00:13:17.038 --rc genhtml_function_coverage=1 00:13:17.038 --rc genhtml_legend=1 00:13:17.038 --rc geninfo_all_blocks=1 00:13:17.038 --rc geninfo_unexecuted_blocks=1 00:13:17.038 00:13:17.038 ' 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:17.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.038 --rc genhtml_branch_coverage=1 00:13:17.038 --rc genhtml_function_coverage=1 00:13:17.038 --rc genhtml_legend=1 00:13:17.038 --rc geninfo_all_blocks=1 00:13:17.038 --rc geninfo_unexecuted_blocks=1 00:13:17.038 00:13:17.038 ' 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:17.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.038 --rc genhtml_branch_coverage=1 00:13:17.038 --rc genhtml_function_coverage=1 00:13:17.038 --rc genhtml_legend=1 00:13:17.038 --rc geninfo_all_blocks=1 00:13:17.038 --rc geninfo_unexecuted_blocks=1 00:13:17.038 00:13:17.038 ' 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:13:17.038 15:17:44 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 
00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:13:17.038 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:13:17.039 15:17:44 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 
-- # CONFIG_RAID5F=n 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:17.039 15:17:44 
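The trace above shows test/common/applications.sh resolving the repository root from its own location (dirname plus readlink -f, then walking up) and collecting the target binaries into arrays such as NVMF_APP and SPDK_APP. A minimal, self-contained sketch of that pattern follows; the directory layout and app list here are illustrative, not the exact SPDK tree.

#!/usr/bin/env bash
# Sketch: derive a repo root from this script's location and build app-path arrays.
# Paths below are hypothetical; the real applications.sh uses its own fixed layout.
script_dir=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")")
repo_root=$(readlink -f "$script_dir/../..")     # walk up to the assumed repo root

app_dir="$repo_root/build/bin"
examples_dir="$repo_root/build/examples"

# Arrays let callers append extra arguments later, e.g. "${NVMF_APP[@]}" -m 0x3
NVMF_APP=("$app_dir/nvmf_tgt")
SPDK_APP=("$app_dir/spdk_tgt")

printf 'root=%s\napp_dir=%s\n' "$repo_root" "$app_dir"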
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:13:17.039 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:17.039 #define SPDK_CONFIG_H 00:13:17.039 #define SPDK_CONFIG_AIO_FSDEV 1 00:13:17.039 #define SPDK_CONFIG_APPS 1 00:13:17.039 #define SPDK_CONFIG_ARCH native 00:13:17.039 #define SPDK_CONFIG_ASAN 1 00:13:17.039 #undef SPDK_CONFIG_AVAHI 00:13:17.039 #undef SPDK_CONFIG_CET 00:13:17.039 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:13:17.039 #define SPDK_CONFIG_COVERAGE 1 00:13:17.039 #define SPDK_CONFIG_CROSS_PREFIX 00:13:17.039 #undef SPDK_CONFIG_CRYPTO 00:13:17.039 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:17.039 #undef SPDK_CONFIG_CUSTOMOCF 00:13:17.039 #undef SPDK_CONFIG_DAOS 00:13:17.039 #define SPDK_CONFIG_DAOS_DIR 00:13:17.039 #define SPDK_CONFIG_DEBUG 1 00:13:17.039 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:17.039 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:13:17.039 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:17.039 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:17.039 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:17.039 #undef SPDK_CONFIG_DPDK_UADK 00:13:17.039 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:13:17.039 #define SPDK_CONFIG_EXAMPLES 1 00:13:17.039 #undef SPDK_CONFIG_FC 00:13:17.039 #define SPDK_CONFIG_FC_PATH 00:13:17.039 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:17.039 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:17.039 #define SPDK_CONFIG_FSDEV 1 00:13:17.039 #undef SPDK_CONFIG_FUSE 00:13:17.039 #undef SPDK_CONFIG_FUZZER 00:13:17.039 #define SPDK_CONFIG_FUZZER_LIB 00:13:17.039 #undef SPDK_CONFIG_GOLANG 00:13:17.039 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:17.039 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:17.039 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:17.039 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:17.039 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:17.039 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:17.039 #undef SPDK_CONFIG_HAVE_LZ4 00:13:17.039 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:13:17.039 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:13:17.039 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:17.039 #define SPDK_CONFIG_IDXD 1 00:13:17.039 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:17.039 #undef SPDK_CONFIG_IPSEC_MB 00:13:17.039 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:17.039 #define SPDK_CONFIG_ISAL 1 00:13:17.039 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:17.039 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:17.039 #define SPDK_CONFIG_LIBDIR 00:13:17.039 #undef SPDK_CONFIG_LTO 00:13:17.039 #define SPDK_CONFIG_MAX_LCORES 128 00:13:17.039 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:13:17.039 #define SPDK_CONFIG_NVME_CUSE 1 00:13:17.039 #undef SPDK_CONFIG_OCF 00:13:17.039 #define SPDK_CONFIG_OCF_PATH 00:13:17.039 #define SPDK_CONFIG_OPENSSL_PATH 00:13:17.039 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:17.039 #define SPDK_CONFIG_PGO_DIR 00:13:17.039 #undef SPDK_CONFIG_PGO_USE 00:13:17.039 #define SPDK_CONFIG_PREFIX /usr/local 00:13:17.039 #undef SPDK_CONFIG_RAID5F 00:13:17.039 #undef SPDK_CONFIG_RBD 00:13:17.039 #define SPDK_CONFIG_RDMA 1 00:13:17.039 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:17.039 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:17.040 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:17.040 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:17.040 #define SPDK_CONFIG_SHARED 1 00:13:17.040 #undef SPDK_CONFIG_SMA 
00:13:17.040 #define SPDK_CONFIG_TESTS 1 00:13:17.040 #undef SPDK_CONFIG_TSAN 00:13:17.040 #define SPDK_CONFIG_UBLK 1 00:13:17.040 #define SPDK_CONFIG_UBSAN 1 00:13:17.040 #undef SPDK_CONFIG_UNIT_TESTS 00:13:17.040 #undef SPDK_CONFIG_URING 00:13:17.040 #define SPDK_CONFIG_URING_PATH 00:13:17.040 #undef SPDK_CONFIG_URING_ZNS 00:13:17.040 #undef SPDK_CONFIG_USDT 00:13:17.040 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:17.040 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:17.040 #undef SPDK_CONFIG_VFIO_USER 00:13:17.040 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:17.040 #define SPDK_CONFIG_VHOST 1 00:13:17.040 #define SPDK_CONFIG_VIRTIO 1 00:13:17.040 #undef SPDK_CONFIG_VTUNE 00:13:17.040 #define SPDK_CONFIG_VTUNE_DIR 00:13:17.040 #define SPDK_CONFIG_WERROR 1 00:13:17.040 #define SPDK_CONFIG_WPDK_DIR 00:13:17.040 #undef SPDK_CONFIG_XNVME 00:13:17.040 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
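The applications.sh@23 step above reads the generated include/spdk/config.h and uses a bash pattern match to decide whether SPDK_CONFIG_DEBUG is defined before honouring SPDK_AUTOTEST_DEBUG_APPS. A small sketch of that kind of check, assuming the header path and macro name are passed as arguments:

#!/usr/bin/env bash
# Sketch: test whether a generated C header defines a given macro.
# Usage: ./has_define.sh include/spdk/config.h SPDK_CONFIG_DEBUG
header=$1
macro=$2

[[ -e $header ]] || { echo "missing $header" >&2; exit 1; }

# Read the whole header and glob-match for "#define <macro>", mirroring the
# [[ $(<file) == *"#define ..."* ]] idiom seen in the trace.
if [[ $(<"$header") == *"#define ${macro}"* ]]; then
    echo "$macro is enabled"
else
    echo "$macro is disabled"
fi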
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:17.040 15:17:44 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # 
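The scripts/perf/pm/common block traced above picks which resource monitors will run alongside the tests: collect-cpu-load and collect-vmstat always, plus collect-cpu-temp and collect-bmc-pm only on bare-metal Linux (not QEMU, not a container), with an associative array recording which monitors need sudo. A reduced sketch of that selection logic, with the hostname/QEMU test simplified to the container check:

#!/usr/bin/env bash
# Sketch: choose resource monitors based on the environment (simplified logic,
# not the exact pm/common tests).
declare -A MONITOR_RESOURCES_SUDO=(
    [collect-bmc-pm]=1      # BMC power readings need sudo
    [collect-cpu-load]=0
    [collect-cpu-temp]=0
    [collect-vmstat]=0
)
SUDO=("" "sudo -E")          # indexed with 0/1 from the map above

MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)

# Only add the hardware monitors on bare-metal Linux outside containers.
if [[ $(uname -s) == Linux && ! -e /.dockerenv ]]; then
    MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
fi

for m in "${MONITOR_RESOURCES[@]}"; do
    echo "${SUDO[${MONITOR_RESOURCES_SUDO[$m]:-0}]} $m"
done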
export SPDK_TEST_ISCSI 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:17.040 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # 
export SPDK_TEST_VHOST 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export 
SPDK_TEST_VMD 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export 
SPDK_TEST_ACCEL_IAA 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
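The long run of ": 0" / "export SPDK_TEST_*" pairs above is autotest_common.sh giving every test switch a default (0, 1, or a string such as rdma or mlx5) unless the job's autorun-spdk.conf already set it, then exporting it for the child scripts. The idiom is roughly the following; the specific variables and defaults shown are just examples:

#!/usr/bin/env bash
# Sketch: default-and-export pattern for test feature flags.
# ": ${VAR:=default}" assigns only when VAR is unset or empty; the flag is then
# exported so every child test script sees the same value.
: "${SPDK_RUN_FUNCTIONAL_TEST:=1}";    export SPDK_RUN_FUNCTIONAL_TEST
: "${SPDK_TEST_NVMF:=0}";              export SPDK_TEST_NVMF
: "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"; export SPDK_TEST_NVMF_TRANSPORT
: "${SPDK_TEST_NVMF_NICS:=mlx5}";      export SPDK_TEST_NVMF_NICS

env | grep -E '^SPDK_(RUN|TEST)_' | sort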
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:17.041 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # 
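Above, autotest_common.sh writes an LSAN suppression entry for libfuse3 into /var/tmp/asan_suppression_file and exports ASAN/UBSAN/LSAN option strings so every instrumented binary started by the tests aborts on errors while skipping the known fuse leak. A compact sketch of the same setup; the suppression path and the final test binary are placeholders:

#!/usr/bin/env bash
# Sketch: configure sanitizer runtime options for a test run.
supp=/var/tmp/asan_suppression_file   # example path, matching the trace

rm -rf "$supp"
# One suppression pattern per line; "leak:" entries silence matching leak reports.
echo "leak:libfuse3.so" > "$supp"

export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
export LSAN_OPTIONS=suppressions=$supp

./some_asan_built_test_binary          # placeholder for an instrumented test program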
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j72 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=rdma 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 3035098 ]] 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 3035098 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
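The 'for i in "$@" ... case "$i"' lines above are autotest_common.sh scanning its arguments and setting TEST_TRANSPORT=rdma for this run, after establishing build defaults such as MAKEFLAGS=-j72 and HUGEMEM=4096. A tiny sketch of that option-scanning idiom; the flag spellings here are assumptions for illustration, not necessarily the real option names:

#!/usr/bin/env bash
# Sketch: pick a transport from positional options (flag syntax assumed,
# e.g. "--transport=rdma").
TEST_MODE=
TEST_TRANSPORT=

for i in "$@"; do
    case "$i" in
        --transport=*) TEST_TRANSPORT=${i#*=} ;;
        --iso)         TEST_MODE=iso ;;
        *)             echo "ignoring unknown option: $i" >&2 ;;
    esac
done

echo "transport=${TEST_TRANSPORT:-none} mode=${TEST_MODE:-default}"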
common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.0MYD2d 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.0MYD2d/tests/target /tmp/spdk.0MYD2d 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:13:17.042 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=84081983488 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=94500311040 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10418327552 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=47235358720 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250153472 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=14794752 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=18877227008 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=18900062208 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22835200 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=46175973376 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250157568 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=1074184192 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:17.043 15:17:44 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=9450016768 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=9450029056 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:13:17.043 * Looking for test storage... 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=84081983488 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=12632920064 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:17.043 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:13:17.043 15:17:44 
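set_test_storage, traced above, asks df -T for every mount, walks the candidate directories (the test dir, a mktemp fallback under /tmp, then the fallback itself), and takes the first mount whose free space covers the requested 2 GiB, exporting it as SPDK_TEST_STORAGE. A simplified, standalone sketch of that space check (single candidate, none of the overlay/tmpfs special cases):

#!/usr/bin/env bash
# Sketch: pick a directory with at least the requested amount of free space.
set -euo pipefail

requested_size=$((2 * 1024 * 1024 * 1024))   # 2 GiB, as in the trace
candidate=${1:-/tmp}

# df -P prints sizes in 1K blocks; field 4 is the available space of the
# mount that contains the candidate directory.
avail_kb=$(df -P "$candidate" | awk 'NR==2 {print $4}')
avail_bytes=$((avail_kb * 1024))

if (( avail_bytes >= requested_size )); then
    export SPDK_TEST_STORAGE=$candidate
    printf '* Found test storage at %s\n' "$SPDK_TEST_STORAGE"
else
    echo "not enough space on $candidate ($avail_bytes < $requested_size)" >&2
    exit 1
fi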
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:17.043 15:17:44 
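Just above, autotest_common.sh installs an ERR trap, sets a PS4 that stamps each traced command with the time plus source file and line, and points the xtrace output at file descriptor 15 before re-enabling set -x; that is why every line in this log carries the "15:17:44 ... -- file@line -- #" prefix. A minimal sketch of routing bash xtrace to a dedicated fd with a descriptive PS4 (the fd number and log path here are arbitrary):

#!/usr/bin/env bash
# Sketch: send "set -x" trace output to its own file descriptor with a rich PS4.
exec 15> /tmp/xtrace.log          # any unused fd/path works; 15 mirrors the trace
export BASH_XTRACEFD=15           # bash writes xtrace lines to this fd instead of stderr

# \t expands to the current time; then the source file and line of the command.
PS4=' \t -- ${BASH_SOURCE##*/}@${LINENO} -- \$ '

set -x
echo "hello"                      # this command's trace lands in /tmp/xtrace.log
set +x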
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:17.043 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:17.303 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:17.303 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:17.303 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:17.303 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:17.303 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:17.303 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:17.303 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:17.303 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:17.303 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:17.303 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:17.303 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:17.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.303 --rc genhtml_branch_coverage=1 00:13:17.303 --rc genhtml_function_coverage=1 00:13:17.303 --rc genhtml_legend=1 00:13:17.303 --rc geninfo_all_blocks=1 00:13:17.303 --rc geninfo_unexecuted_blocks=1 00:13:17.303 00:13:17.303 ' 00:13:17.303 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:17.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.304 --rc genhtml_branch_coverage=1 00:13:17.304 --rc genhtml_function_coverage=1 00:13:17.304 --rc genhtml_legend=1 00:13:17.304 --rc geninfo_all_blocks=1 00:13:17.304 --rc geninfo_unexecuted_blocks=1 00:13:17.304 00:13:17.304 ' 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:17.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.304 --rc genhtml_branch_coverage=1 00:13:17.304 --rc genhtml_function_coverage=1 00:13:17.304 --rc genhtml_legend=1 00:13:17.304 --rc geninfo_all_blocks=1 00:13:17.304 --rc geninfo_unexecuted_blocks=1 00:13:17.304 00:13:17.304 ' 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:17.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.304 --rc genhtml_branch_coverage=1 00:13:17.304 --rc genhtml_function_coverage=1 00:13:17.304 --rc genhtml_legend=1 00:13:17.304 --rc geninfo_all_blocks=1 00:13:17.304 --rc geninfo_unexecuted_blocks=1 00:13:17.304 00:13:17.304 ' 00:13:17.304 
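The block above is scripts/common.sh deciding, from the output of lcov --version, whether the old --rc lcov_* coverage flags are still needed (lcov older than 2). A minimal sketch of that comparison follows; the helper name and the simplified field handling are assumptions rather than a copy of scripts/common.sh:

  # returns success when dot-separated version $1 sorts numerically before $2
  lt() {
    local IFS=. i
    local -a v1=($1) v2=($2)
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1
  }
  # lcov 1.15 is < 2 on this rig, so the branch/function coverage --rc flags get exported
  if lt "$(lcov --version | awk '{print $NF}')" 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi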
15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:17.304 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:13:17.304 15:17:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@320 -- # e810=() 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:23.873 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:23.873 
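The trace here is nvmf/common.sh filtering the host's PCI devices down to the Mellanox (0x15b3) IDs listed in its mlx array; both ports of the adapter at 0000:18:00.x match 0x1015. A rough sketch of that scan, reading sysfs directly instead of the script's internal pci_bus_cache helper:

  for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor") device=$(<"$pci/device")
    [[ $vendor == 0x15b3 ]] || continue                 # keep Mellanox parts only
    echo "Found ${pci##*/} ($vendor - $device)"
    ls "$pci/net" 2>/dev/null                           # netdev name, e.g. mlx_0_0
  done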
15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:23.873 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:23.873 Found net devices under 0000:18:00.0: mlx_0_0 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:23.873 Found net devices under 0000:18:00.1: mlx_0_1 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # rdma_device_init 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:23.873 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:24.133 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:24.133 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:24.133 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:24.133 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:24.133 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:24.133 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:24.133 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:24.133 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:24.133 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:24.133 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:24.133 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:24.133 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:24.133 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:24.134 15:17:51 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:24.134 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:24.134 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:13:24.134 altname enp24s0f0np0 00:13:24.134 altname ens785f0np0 00:13:24.134 inet 192.168.100.8/24 scope global mlx_0_0 00:13:24.134 valid_lft forever preferred_lft forever 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:24.134 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:24.134 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:13:24.134 altname enp24s0f1np1 00:13:24.134 altname ens785f1np1 00:13:24.134 inet 192.168.100.9/24 scope global mlx_0_1 00:13:24.134 valid_lft forever preferred_lft forever 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:13:24.134 15:17:51 
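Both RDMA ports already carry the addresses the test expects, so allocate_nic_ips only reads them back. A sketch of the addressing scheme, assuming the NVMF_IP_PREFIX=192.168.100 and NVMF_IP_LEAST_ADDR=8 values set earlier; as in the trace, an interface that already has an address is left alone:

  count=8
  for nic in mlx_0_0 mlx_0_1; do
    ip -o -4 addr show "$nic" | awk '{print $4}' | cut -d/ -f1 | grep -q . ||
      ip addr add "192.168.100.$count/24" dev "$nic"
    (( count++ ))
  done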
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_1 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:24.134 192.168.100.9' 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:24.134 192.168.100.9' 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # head -n 1 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:24.134 192.168.100.9' 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # tail -n +2 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # head -n 1 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:24.134 ************************************ 00:13:24.134 START TEST nvmf_filesystem_no_in_capsule 00:13:24.134 ************************************ 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:24.134 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:24.134 15:17:51 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:24.135 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3038004 00:13:24.135 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3038004 00:13:24.135 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:24.135 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 3038004 ']' 00:13:24.135 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.135 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:24.135 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.135 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:24.135 15:17:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:24.393 [2024-11-06 15:17:51.853928] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:13:24.393 [2024-11-06 15:17:51.854051] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.393 [2024-11-06 15:17:52.002700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:24.652 [2024-11-06 15:17:52.113241] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.652 [2024-11-06 15:17:52.113297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.652 [2024-11-06 15:17:52.113310] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.652 [2024-11-06 15:17:52.113323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.652 [2024-11-06 15:17:52.113333] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
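nvmfappstart has just launched build/bin/nvmf_tgt with every trace group enabled (-e 0xFFFF) on four cores (-m 0xF), and waitforlisten is polling for its RPC socket; the DPDK EAL and reactor notices that follow are the target coming up. A condensed sketch of that launch-and-wait, paraphrasing waitforlisten (the 0.5 s poll interval is an assumption, not the script's exact timing):

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until [[ -S /var/tmp/spdk.sock ]] &&
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
  done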
00:13:24.652 [2024-11-06 15:17:52.115626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.652 [2024-11-06 15:17:52.115711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.652 [2024-11-06 15:17:52.115778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.652 [2024-11-06 15:17:52.115803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:25.218 15:17:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:25.218 15:17:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:13:25.218 15:17:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:25.218 15:17:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:25.218 15:17:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:25.218 15:17:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.218 15:17:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:25.218 15:17:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:13:25.218 15:17:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.218 15:17:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:25.218 [2024-11-06 15:17:52.717919] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:13:25.218 [2024-11-06 15:17:52.741275] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f6b22f48940) succeed. 00:13:25.218 [2024-11-06 15:17:52.750876] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f6b22f04940) succeed. 
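With the target up and both IB devices registered, the test provisions the subsystem over RPC. The same five calls appear below through the rpc_cmd wrapper; shown here as plain scripts/rpc.py invocations with the values visible in the trace:

  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
  ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The host side then connects with the nvme connect line shown further down, using the hostnqn/hostid produced by nvme gen-hostnqn, and waits for the SPDKISFASTANDAWESOME serial to appear in lsblk before partitioning the namespace.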
00:13:25.477 15:17:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.477 15:17:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:25.477 15:17:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.477 15:17:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:26.044 Malloc1 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:26.044 [2024-11-06 15:17:53.471339] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:13:26.044 15:17:53 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:13:26.044 { 00:13:26.044 "name": "Malloc1", 00:13:26.044 "aliases": [ 00:13:26.044 "a973b585-8f0f-4839-8a63-9ff184a99567" 00:13:26.044 ], 00:13:26.044 "product_name": "Malloc disk", 00:13:26.044 "block_size": 512, 00:13:26.044 "num_blocks": 1048576, 00:13:26.044 "uuid": "a973b585-8f0f-4839-8a63-9ff184a99567", 00:13:26.044 "assigned_rate_limits": { 00:13:26.044 "rw_ios_per_sec": 0, 00:13:26.044 "rw_mbytes_per_sec": 0, 00:13:26.044 "r_mbytes_per_sec": 0, 00:13:26.044 "w_mbytes_per_sec": 0 00:13:26.044 }, 00:13:26.044 "claimed": true, 00:13:26.044 "claim_type": "exclusive_write", 00:13:26.044 "zoned": false, 00:13:26.044 "supported_io_types": { 00:13:26.044 "read": true, 00:13:26.044 "write": true, 00:13:26.044 "unmap": true, 00:13:26.044 "flush": true, 00:13:26.044 "reset": true, 00:13:26.044 "nvme_admin": false, 00:13:26.044 "nvme_io": false, 00:13:26.044 "nvme_io_md": false, 00:13:26.044 "write_zeroes": true, 00:13:26.044 "zcopy": true, 00:13:26.044 "get_zone_info": false, 00:13:26.044 "zone_management": false, 00:13:26.044 "zone_append": false, 00:13:26.044 "compare": false, 00:13:26.044 "compare_and_write": false, 00:13:26.044 "abort": true, 00:13:26.044 "seek_hole": false, 00:13:26.044 "seek_data": false, 00:13:26.044 "copy": true, 00:13:26.044 "nvme_iov_md": false 00:13:26.044 }, 00:13:26.044 "memory_domains": [ 00:13:26.044 { 00:13:26.044 "dma_device_id": "system", 00:13:26.044 "dma_device_type": 1 00:13:26.044 }, 00:13:26.044 { 00:13:26.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.044 "dma_device_type": 2 00:13:26.044 } 00:13:26.044 ], 00:13:26.044 "driver_specific": {} 00:13:26.044 } 00:13:26.044 ]' 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:13:26.044 15:17:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:26.977 15:17:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:26.977 15:17:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:13:26.977 15:17:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:26.977 15:17:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:26.977 15:17:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:13:29.509 15:17:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:29.509 15:17:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:29.509 15:17:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:29.509 15:17:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:29.509 15:17:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:29.509 15:17:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:13:29.509 15:17:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:29.509 15:17:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:29.509 15:17:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:29.509 15:17:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:29.509 15:17:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:29.509 15:17:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:29.510 15:17:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:29.510 15:17:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:29.510 15:17:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:29.510 15:17:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:13:29.510 15:17:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:29.510 15:17:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:29.510 15:17:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:30.446 15:17:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:30.446 15:17:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:30.446 15:17:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:30.446 15:17:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:30.446 15:17:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:30.446 ************************************ 00:13:30.446 START TEST filesystem_ext4 00:13:30.446 ************************************ 00:13:30.446 15:17:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:30.446 15:17:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:30.446 15:17:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:30.446 15:17:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:30.446 15:17:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:13:30.446 15:17:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:30.446 15:17:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:13:30.446 15:17:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:13:30.446 15:17:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:13:30.446 15:17:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:13:30.446 15:17:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:30.446 mke2fs 1.47.0 (5-Feb-2023) 00:13:30.446 Discarding device blocks: 0/522240 done 00:13:30.446 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:30.446 Filesystem UUID: 7ff10a7a-22f1-4d73-b6ca-4027dc0d6df7 00:13:30.446 Superblock backups stored on 
blocks: 00:13:30.446 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:30.446 00:13:30.446 Allocating group tables: 0/64 done 00:13:30.446 Writing inode tables: 0/64 done 00:13:30.446 Creating journal (8192 blocks): done 00:13:30.446 Writing superblocks and filesystem accounting information: 0/64 done 00:13:30.446 00:13:30.446 15:17:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:13:30.446 15:17:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:30.446 15:17:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:30.446 15:17:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:30.446 15:17:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:30.446 15:17:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:30.446 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:30.446 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:30.446 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3038004 00:13:30.446 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:30.446 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:30.446 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:30.446 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:30.446 00:13:30.446 real 0m0.223s 00:13:30.446 user 0m0.027s 00:13:30.446 sys 0m0.081s 00:13:30.446 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:30.446 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:30.446 ************************************ 00:13:30.446 END TEST filesystem_ext4 00:13:30.446 ************************************ 00:13:30.705 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:30.705 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:30.705 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:30.705 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:13:30.705 ************************************ 00:13:30.705 START TEST filesystem_btrfs 00:13:30.705 ************************************ 00:13:30.705 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:30.705 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:30.705 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:30.705 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:30.705 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:13:30.705 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:30.705 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:13:30.705 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:13:30.705 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:13:30.705 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:13:30.706 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:30.706 btrfs-progs v6.8.1 00:13:30.706 See https://btrfs.readthedocs.io for more information. 00:13:30.706 00:13:30.706 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:30.706 NOTE: several default settings have changed in version 5.15, please make sure 00:13:30.706 this does not affect your deployments: 00:13:30.706 - DUP for metadata (-m dup) 00:13:30.706 - enabled no-holes (-O no-holes) 00:13:30.706 - enabled free-space-tree (-R free-space-tree) 00:13:30.706 00:13:30.706 Label: (null) 00:13:30.706 UUID: 8f4ea930-9e8e-48da-941c-1f79c446ff6c 00:13:30.706 Node size: 16384 00:13:30.706 Sector size: 4096 (CPU page size: 4096) 00:13:30.706 Filesystem size: 510.00MiB 00:13:30.706 Block group profiles: 00:13:30.706 Data: single 8.00MiB 00:13:30.706 Metadata: DUP 32.00MiB 00:13:30.706 System: DUP 8.00MiB 00:13:30.706 SSD detected: yes 00:13:30.706 Zoned device: no 00:13:30.706 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:30.706 Checksum: crc32c 00:13:30.706 Number of devices: 1 00:13:30.706 Devices: 00:13:30.706 ID SIZE PATH 00:13:30.706 1 510.00MiB /dev/nvme0n1p1 00:13:30.706 00:13:30.706 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:13:30.706 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3038004 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:30.965 00:13:30.965 real 0m0.269s 00:13:30.965 user 0m0.027s 00:13:30.965 sys 0m0.128s 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:30.965 ************************************ 00:13:30.965 END TEST filesystem_btrfs 
00:13:30.965 ************************************ 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:30.965 ************************************ 00:13:30.965 START TEST filesystem_xfs 00:13:30.965 ************************************ 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:13:30.965 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:30.965 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:30.965 = sectsz=512 attr=2, projid32bit=1 00:13:30.965 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:30.965 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:30.965 data = bsize=4096 blocks=130560, imaxpct=25 00:13:30.965 = sunit=0 swidth=0 blks 00:13:30.965 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:30.965 log =internal log bsize=4096 blocks=16384, version=2 00:13:30.965 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:30.965 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:31.224 Discarding blocks...Done. 
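The mkfs.xfs output above is produced by make_filesystem inside target/filesystem.sh; the trace entries that follow mount the freshly formatted partition, write and delete a small file, unmount, and then confirm via lsblk that the namespace and its partition are still visible. A minimal bash sketch of that per-filesystem check cycle, reconstructed from the xtrace output rather than copied from the SPDK repository (device and mount-point names are taken from the log; the retry counter i and the kill -0 liveness check of the target process are left out):

# Approximate shape of the cycle seen at target/filesystem.sh@23-43 in the trace.
# Assumes /dev/nvme0n1p1 already carries a filesystem and /mnt/device exists.
dev=/dev/nvme0n1p1
mnt=/mnt/device

mount "$dev" "$mnt"     # filesystem.sh@23
touch "$mnt/aaa"        # filesystem.sh@24: create a file on the NVMe-oF namespace
sync                    # filesystem.sh@25: flush it out
rm "$mnt/aaa"           # filesystem.sh@26: remove it again
sync                    # filesystem.sh@27
umount "$mnt"           # filesystem.sh@30

# filesystem.sh@40/@43: the block device and the partition must still be listed.
lsblk -l -o NAME | grep -q -w nvme0n1
lsblk -l -o NAME | grep -q -w nvme0n1p1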
00:13:31.224 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:13:31.224 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:31.224 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:31.224 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:31.224 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:31.224 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:31.224 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:31.224 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:31.224 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3038004 00:13:31.224 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:31.224 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:31.224 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:31.224 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:31.224 00:13:31.224 real 0m0.216s 00:13:31.224 user 0m0.027s 00:13:31.224 sys 0m0.079s 00:13:31.224 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:31.224 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:31.224 ************************************ 00:13:31.224 END TEST filesystem_xfs 00:13:31.224 ************************************ 00:13:31.224 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:31.224 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:31.224 15:17:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:32.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.161 15:17:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:32.161 15:17:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:13:32.161 15:17:59 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:32.161 15:17:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.161 15:17:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:32.161 15:17:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.161 15:17:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:13:32.161 15:17:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.161 15:17:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.161 15:17:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:32.419 15:17:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.419 15:17:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:32.419 15:17:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3038004 00:13:32.419 15:17:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3038004 ']' 00:13:32.419 15:17:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3038004 00:13:32.419 15:17:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:13:32.419 15:17:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:32.419 15:17:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3038004 00:13:32.419 15:17:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:32.419 15:17:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:32.419 15:17:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3038004' 00:13:32.419 killing process with pid 3038004 00:13:32.419 15:17:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 3038004 00:13:32.419 15:17:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@976 -- # wait 3038004 00:13:35.700 15:18:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:35.700 00:13:35.700 real 0m10.942s 00:13:35.700 user 0m40.938s 00:13:35.700 sys 0m1.489s 00:13:35.700 15:18:02 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:35.700 15:18:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:35.700 ************************************ 00:13:35.700 END TEST nvmf_filesystem_no_in_capsule 00:13:35.700 ************************************ 00:13:35.700 15:18:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:35.700 15:18:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:35.700 15:18:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:35.700 15:18:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:35.700 ************************************ 00:13:35.700 START TEST nvmf_filesystem_in_capsule 00:13:35.700 ************************************ 00:13:35.700 15:18:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:13:35.700 15:18:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:35.700 15:18:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:35.700 15:18:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:35.700 15:18:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:35.700 15:18:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:35.700 15:18:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3039798 00:13:35.700 15:18:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3039798 00:13:35.700 15:18:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:35.700 15:18:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 3039798 ']' 00:13:35.700 15:18:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.700 15:18:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:35.700 15:18:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:35.700 15:18:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:35.700 15:18:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:35.700 [2024-11-06 15:18:02.885920] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:13:35.700 [2024-11-06 15:18:02.886044] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.700 [2024-11-06 15:18:03.019469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:35.700 [2024-11-06 15:18:03.128879] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.700 [2024-11-06 15:18:03.128938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.700 [2024-11-06 15:18:03.128966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:35.700 [2024-11-06 15:18:03.128980] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:35.700 [2024-11-06 15:18:03.128989] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:35.700 [2024-11-06 15:18:03.131187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.700 [2024-11-06 15:18:03.131333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.700 [2024-11-06 15:18:03.131263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.700 [2024-11-06 15:18:03.131361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:36.267 15:18:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:36.267 15:18:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:13:36.267 15:18:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:36.267 15:18:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:36.267 15:18:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:36.267 15:18:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.267 15:18:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:36.267 15:18:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:13:36.267 15:18:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.267 15:18:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:36.267 [2024-11-06 15:18:03.787742] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device 
mlx5_0(0x612000029140/0x7fe7f4fb3940) succeed. 00:13:36.267 [2024-11-06 15:18:03.797245] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7fe7f4f6f940) succeed. 00:13:36.525 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.525 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:36.525 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.525 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:37.092 Malloc1 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:37.092 [2024-11-06 15:18:04.627307] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1382 -- # local bs 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:13:37.092 { 00:13:37.092 "name": "Malloc1", 00:13:37.092 "aliases": [ 00:13:37.092 "4faa8388-a036-4385-b81c-4e60cbb27669" 00:13:37.092 ], 00:13:37.092 "product_name": "Malloc disk", 00:13:37.092 "block_size": 512, 00:13:37.092 "num_blocks": 1048576, 00:13:37.092 "uuid": "4faa8388-a036-4385-b81c-4e60cbb27669", 00:13:37.092 "assigned_rate_limits": { 00:13:37.092 "rw_ios_per_sec": 0, 00:13:37.092 "rw_mbytes_per_sec": 0, 00:13:37.092 "r_mbytes_per_sec": 0, 00:13:37.092 "w_mbytes_per_sec": 0 00:13:37.092 }, 00:13:37.092 "claimed": true, 00:13:37.092 "claim_type": "exclusive_write", 00:13:37.092 "zoned": false, 00:13:37.092 "supported_io_types": { 00:13:37.092 "read": true, 00:13:37.092 "write": true, 00:13:37.092 "unmap": true, 00:13:37.092 "flush": true, 00:13:37.092 "reset": true, 00:13:37.092 "nvme_admin": false, 00:13:37.092 "nvme_io": false, 00:13:37.092 "nvme_io_md": false, 00:13:37.092 "write_zeroes": true, 00:13:37.092 "zcopy": true, 00:13:37.092 "get_zone_info": false, 00:13:37.092 "zone_management": false, 00:13:37.092 "zone_append": false, 00:13:37.092 "compare": false, 00:13:37.092 "compare_and_write": false, 00:13:37.092 "abort": true, 00:13:37.092 "seek_hole": false, 00:13:37.092 "seek_data": false, 00:13:37.092 "copy": true, 00:13:37.092 "nvme_iov_md": false 00:13:37.092 }, 00:13:37.092 "memory_domains": [ 00:13:37.092 { 00:13:37.092 "dma_device_id": "system", 00:13:37.092 "dma_device_type": 1 00:13:37.092 }, 00:13:37.092 { 00:13:37.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.092 "dma_device_type": 2 00:13:37.092 } 00:13:37.092 ], 00:13:37.092 "driver_specific": {} 00:13:37.092 } 00:13:37.092 ]' 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:13:37.092 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:13:37.350 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:13:37.350 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:13:37.350 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:13:37.350 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@58 -- # malloc_size=536870912 00:13:37.350 15:18:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:38.283 15:18:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:38.283 15:18:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:13:38.283 15:18:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:38.283 15:18:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:38.283 15:18:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:13:40.180 15:18:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:40.180 15:18:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:40.180 15:18:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:40.180 15:18:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:40.180 15:18:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:40.181 15:18:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:13:40.181 15:18:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:40.181 15:18:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:40.181 15:18:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:40.181 15:18:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:40.181 15:18:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:40.181 15:18:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:40.181 15:18:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:40.181 15:18:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:40.181 15:18:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:40.181 15:18:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 
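Before the parted call in the next entries, the trace resolves which kernel block device belongs to the controller that was just connected and checks that its size matches the 512 MiB Malloc1 bdev exported by the target. A rough bash equivalent of that lookup, inferred from the commands visible in the xtrace (the serial string and the 536870912-byte size come straight from the log; the trace's sec_size_to_bytes helper simply echoes the known size, so the sysfs read below is an illustrative substitute, not the helper itself):

serial=SPDKISFASTANDAWESOME
malloc_size=536870912    # 512 B block size * 1048576 blocks, from the bdev_get_bdevs dump above

# filesystem.sh@63: find the kernel name of the namespace with the expected serial.
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP "([\w]*)(?=\s+$serial)")

# /sys/block/<dev>/size is reported in 512-byte sectors.
nvme_size=$(( $(cat /sys/block/$nvme_name/size) * 512 ))

mkdir -p /mnt/device     # filesystem.sh@66
(( nvme_size == malloc_size )) || echo "unexpected size: $nvme_size != $malloc_size" >&2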
00:13:40.181 15:18:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:40.181 15:18:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:40.439 15:18:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:41.372 15:18:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:41.372 15:18:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:41.372 15:18:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:41.372 15:18:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:41.372 15:18:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:41.372 ************************************ 00:13:41.372 START TEST filesystem_in_capsule_ext4 00:13:41.372 ************************************ 00:13:41.372 15:18:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:41.372 15:18:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:41.372 15:18:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:41.372 15:18:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:41.372 15:18:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:13:41.372 15:18:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:41.372 15:18:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:13:41.372 15:18:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:13:41.372 15:18:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:13:41.372 15:18:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:13:41.372 15:18:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:41.372 mke2fs 1.47.0 (5-Feb-2023) 00:13:41.631 Discarding device blocks: 0/522240 done 00:13:41.631 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:41.631 Filesystem UUID: 
df57fc88-f074-49b3-b5bb-8f9fba8311f1 00:13:41.631 Superblock backups stored on blocks: 00:13:41.631 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:41.631 00:13:41.631 Allocating group tables: 0/64 done 00:13:41.631 Writing inode tables: 0/64 done 00:13:41.631 Creating journal (8192 blocks): done 00:13:41.631 Writing superblocks and filesystem accounting information: 0/64 done 00:13:41.631 00:13:41.631 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:13:41.631 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:41.631 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:41.631 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:41.631 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:41.631 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:41.631 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:41.631 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:41.631 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3039798 00:13:41.631 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:41.631 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:41.631 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:41.631 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:41.631 00:13:41.631 real 0m0.218s 00:13:41.631 user 0m0.035s 00:13:41.631 sys 0m0.068s 00:13:41.631 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:41.631 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:41.631 ************************************ 00:13:41.631 END TEST filesystem_in_capsule_ext4 00:13:41.631 ************************************ 00:13:41.631 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:41.631 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:41.631 15:18:09 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:41.631 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:41.889 ************************************ 00:13:41.889 START TEST filesystem_in_capsule_btrfs 00:13:41.889 ************************************ 00:13:41.889 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:41.889 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:41.889 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:41.889 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:41.889 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:13:41.889 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:41.889 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:13:41.889 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:13:41.889 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:13:41.889 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:13:41.889 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:41.889 btrfs-progs v6.8.1 00:13:41.889 See https://btrfs.readthedocs.io for more information. 00:13:41.889 00:13:41.889 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:41.889 NOTE: several default settings have changed in version 5.15, please make sure 00:13:41.889 this does not affect your deployments: 00:13:41.889 - DUP for metadata (-m dup) 00:13:41.889 - enabled no-holes (-O no-holes) 00:13:41.889 - enabled free-space-tree (-R free-space-tree) 00:13:41.889 00:13:41.889 Label: (null) 00:13:41.889 UUID: 17166d64-c1fd-48a7-9a3e-b2203b69ca1f 00:13:41.889 Node size: 16384 00:13:41.889 Sector size: 4096 (CPU page size: 4096) 00:13:41.889 Filesystem size: 510.00MiB 00:13:41.889 Block group profiles: 00:13:41.889 Data: single 8.00MiB 00:13:41.889 Metadata: DUP 32.00MiB 00:13:41.889 System: DUP 8.00MiB 00:13:41.889 SSD detected: yes 00:13:41.889 Zoned device: no 00:13:41.889 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:41.889 Checksum: crc32c 00:13:41.889 Number of devices: 1 00:13:41.889 Devices: 00:13:41.889 ID SIZE PATH 00:13:41.889 1 510.00MiB /dev/nvme0n1p1 00:13:41.889 00:13:41.889 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:13:41.889 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:41.889 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:41.889 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:41.889 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:41.889 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:41.889 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:41.889 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:42.147 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3039798 00:13:42.147 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:42.147 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:42.147 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:42.147 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:42.147 00:13:42.147 real 0m0.272s 00:13:42.147 user 0m0.029s 00:13:42.147 sys 0m0.131s 00:13:42.147 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:42.147 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:13:42.147 ************************************ 00:13:42.147 END TEST filesystem_in_capsule_btrfs 00:13:42.147 ************************************ 00:13:42.147 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:42.147 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:42.147 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:42.147 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:42.147 ************************************ 00:13:42.147 START TEST filesystem_in_capsule_xfs 00:13:42.147 ************************************ 00:13:42.147 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:13:42.147 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:42.147 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:42.147 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:42.147 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:13:42.147 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:42.147 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:13:42.147 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:13:42.147 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:13:42.147 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:13:42.147 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:42.147 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:42.147 = sectsz=512 attr=2, projid32bit=1 00:13:42.147 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:42.147 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:42.147 data = bsize=4096 blocks=130560, imaxpct=25 00:13:42.147 = sunit=0 swidth=0 blks 00:13:42.147 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:42.147 log =internal log bsize=4096 blocks=16384, version=2 00:13:42.147 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:42.147 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:42.147 Discarding blocks...Done. 
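All three mkfs runs in this in-capsule variant, like the three earlier ones, go through the same make_filesystem helper from autotest_common.sh; the xtrace shows it picking -F for ext4 and -f for btrfs and xfs before invoking mkfs.<fstype>. A condensed sketch of that logic as it can be read off the trace (the retry handling suggested by 'local i=0' is collapsed to a single attempt, so this is an approximation of the helper, not a copy of it):

# make_filesystem <fstype> <dev> - approximate shape of autotest_common.sh@928-939
make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local force

    if [ "$fstype" = ext4 ]; then
        force=-F             # mkfs.ext4 forces with -F (autotest_common.sh@934)
    else
        force=-f             # mkfs.btrfs and mkfs.xfs force with -f (autotest_common.sh@936)
    fi

    mkfs.$fstype $force "$dev_name"
}

# e.g. make_filesystem xfs /dev/nvme0n1p1 produces the meta-data block printed above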
00:13:42.147 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:13:42.147 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:42.404 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:42.404 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:42.404 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:42.404 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:42.404 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:42.404 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:42.404 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3039798 00:13:42.404 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:42.404 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:42.404 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:42.404 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:42.404 00:13:42.404 real 0m0.230s 00:13:42.404 user 0m0.033s 00:13:42.404 sys 0m0.074s 00:13:42.404 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:42.404 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:42.404 ************************************ 00:13:42.405 END TEST filesystem_in_capsule_xfs 00:13:42.405 ************************************ 00:13:42.405 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:42.405 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:42.405 15:18:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:43.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.337 15:18:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:43.337 15:18:10 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:13:43.337 15:18:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:43.337 15:18:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.337 15:18:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.337 15:18:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:43.337 15:18:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:13:43.337 15:18:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:43.337 15:18:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.337 15:18:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:43.337 15:18:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.337 15:18:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:43.337 15:18:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3039798 00:13:43.337 15:18:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3039798 ']' 00:13:43.337 15:18:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3039798 00:13:43.337 15:18:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:13:43.337 15:18:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:43.595 15:18:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3039798 00:13:43.595 15:18:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:43.595 15:18:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:43.595 15:18:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3039798' 00:13:43.595 killing process with pid 3039798 00:13:43.595 15:18:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 3039798 00:13:43.595 15:18:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 3039798 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:46.877 00:13:46.877 real 0m11.413s 
00:13:46.877 user 0m42.334s 00:13:46.877 sys 0m1.532s 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:46.877 ************************************ 00:13:46.877 END TEST nvmf_filesystem_in_capsule 00:13:46.877 ************************************ 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:46.877 rmmod nvme_rdma 00:13:46.877 rmmod nvme_fabrics 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:46.877 00:13:46.877 real 0m30.079s 00:13:46.877 user 1m25.631s 00:13:46.877 sys 0m8.637s 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:46.877 ************************************ 00:13:46.877 END TEST nvmf_filesystem 00:13:46.877 ************************************ 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:46.877 ************************************ 00:13:46.877 START TEST nvmf_target_discovery 00:13:46.877 ************************************ 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:13:46.877 * Looking for test storage... 
00:13:46.877 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:13:46.877 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:47.136 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:47.136 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:47.136 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:47.136 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:47.136 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:47.136 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:47.136 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:47.136 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:47.136 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:47.136 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:47.136 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:47.136 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:47.136 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:47.136 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:47.136 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:47.136 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:47.136 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:47.136 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:47.136 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:47.136 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:47.136 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:47.136 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:47.136 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:47.136 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:47.136 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:47.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.137 --rc genhtml_branch_coverage=1 00:13:47.137 --rc genhtml_function_coverage=1 00:13:47.137 --rc genhtml_legend=1 00:13:47.137 --rc geninfo_all_blocks=1 00:13:47.137 --rc geninfo_unexecuted_blocks=1 00:13:47.137 00:13:47.137 ' 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:47.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.137 --rc genhtml_branch_coverage=1 00:13:47.137 --rc genhtml_function_coverage=1 00:13:47.137 --rc genhtml_legend=1 00:13:47.137 --rc geninfo_all_blocks=1 00:13:47.137 --rc geninfo_unexecuted_blocks=1 00:13:47.137 00:13:47.137 ' 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:47.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.137 --rc genhtml_branch_coverage=1 00:13:47.137 --rc genhtml_function_coverage=1 00:13:47.137 --rc genhtml_legend=1 00:13:47.137 --rc geninfo_all_blocks=1 00:13:47.137 --rc geninfo_unexecuted_blocks=1 00:13:47.137 00:13:47.137 ' 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:47.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.137 --rc genhtml_branch_coverage=1 00:13:47.137 --rc genhtml_function_coverage=1 00:13:47.137 --rc genhtml_legend=1 00:13:47.137 --rc geninfo_all_blocks=1 00:13:47.137 --rc geninfo_unexecuted_blocks=1 00:13:47.137 00:13:47.137 ' 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:47.137 15:18:14 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:47.137 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:13:47.137 15:18:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:55.261 15:18:21 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:55.261 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:55.262 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:55.262 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:55.262 Found net devices under 0000:18:00.0: mlx_0_0 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.262 15:18:21 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:55.262 Found net devices under 0000:18:00.1: mlx_0_1 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # rdma_device_init 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 
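The modprobe calls recorded above are essentially all that load_ib_rdma_modules does: the RDMA core, user-space verbs, and connection-manager modules must be loaded before the mlx interfaces can be enumerated or an NVMe-oF RDMA listener can start. A minimal standalone sketch of the same preparation step, run as root, with the module names taken directly from the log (the mlx5_core driver is assumed to be already bound to the ConnectX NICs):

  # InfiniBand/RDMA core and connection-manager modules used on the target side
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$mod"
  done
  # the initiator (host) side additionally needs the NVMe/RDMA fabrics driver,
  # which the harness loads further down as 'modprobe nvme-rdma'
  modprobe nvme-rdma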
00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:55.262 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:55.262 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:13:55.262 altname enp24s0f0np0 00:13:55.262 altname ens785f0np0 00:13:55.262 inet 192.168.100.8/24 scope global mlx_0_0 00:13:55.262 valid_lft forever preferred_lft forever 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:55.262 15:18:21 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:55.262 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:55.263 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:55.263 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:13:55.263 altname enp24s0f1np1 00:13:55.263 altname ens785f1np1 00:13:55.263 inet 192.168.100.9/24 scope global mlx_0_1 00:13:55.263 valid_lft forever preferred_lft forever 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 
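get_ip_address, as the ip/awk/cut pipeline above shows, is how the harness derives the target addresses from the two mlx interfaces. The equivalent one-liner, with the interface name mlx_0_0 taken from the log:

  # prints the first IPv4 address configured on the interface, e.g. 192.168.100.8
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1

Running the same pipeline against mlx_0_1 yields 192.168.100.9; the head/tail filtering further down then splits the pair into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP.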
00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:55.263 192.168.100.9' 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:55.263 192.168.100.9' 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # head -n 1 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:55.263 192.168.100.9' 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # tail -n +2 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # head -n 1 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:55.263 15:18:21 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3044740 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3044740 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 3044740 ']' 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:55.263 15:18:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.263 [2024-11-06 15:18:21.713991] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:13:55.263 [2024-11-06 15:18:21.714105] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.263 [2024-11-06 15:18:21.865353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:55.263 [2024-11-06 15:18:21.976752] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.263 [2024-11-06 15:18:21.976807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:55.263 [2024-11-06 15:18:21.976819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:55.263 [2024-11-06 15:18:21.976850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:55.263 [2024-11-06 15:18:21.976859] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
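nvmfappstart launches the target binary with the flags recorded above (-i 0 for the shared-memory id, -e 0xFFFF for the tracepoint mask, -m 0xF for a four-core reactor mask) and then blocks in waitforlisten until the RPC socket answers. A simplified sketch of the same launch, assuming the SPDK checkout path shown in the log and the default /var/tmp/spdk.sock socket; a plain polling loop stands in here for the harness's waitforlisten helper:

  SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # path as recorded in the log
  "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # wait until the target is ready to serve RPCs on its UNIX-domain socket
  until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "nvmf_tgt is up with pid $nvmfpid"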
00:13:55.263 [2024-11-06 15:18:21.979100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.263 [2024-11-06 15:18:21.979205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.263 [2024-11-06 15:18:21.979239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.263 [2024-11-06 15:18:21.979278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:55.263 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:55.263 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:13:55.263 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:55.263 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:55.263 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.263 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.263 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:55.263 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.263 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.263 [2024-11-06 15:18:22.605011] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f5610fa6940) succeed. 00:13:55.263 [2024-11-06 15:18:22.614560] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f5610f62940) succeed. 
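With the RDMA transport created (nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192) and both mlx5 IB devices up, the entries that follow show discovery.sh provisioning four null bdevs, one subsystem per bdev, a listener for each on 192.168.100.8:4420, plus the discovery listener and a referral on port 4430. A hedged equivalent driven through scripts/rpc.py directly (rpc_cmd in the log forwards to the same RPC server); parameters are copied from the log, and $SPDK_DIR is the same assumption as in the previous sketch:

  RPC="$SPDK_DIR/scripts/rpc.py"
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  for i in 1 2 3 4; do
      $RPC bdev_null_create Null$i 102400 512                 # NULL_BDEV_SIZE / NULL_BLOCK_SIZE
      $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
  done
  # discovery listener plus a referral on the NVMF_PORT_REFERRAL port
  $RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  $RPC nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430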
00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.523 Null1 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.523 [2024-11-06 15:18:22.943527] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.523 Null2 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:55.523 15:18:22 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.523 Null3 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.523 15:18:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.524 15:18:23 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.524 Null4 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.524 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:13:55.783 00:13:55.783 Discovery Log Number of Records 6, Generation counter 6 00:13:55.783 =====Discovery Log Entry 0====== 00:13:55.783 trtype: rdma 00:13:55.783 adrfam: ipv4 00:13:55.783 subtype: current discovery subsystem 00:13:55.783 treq: not required 00:13:55.783 portid: 0 00:13:55.783 trsvcid: 4420 00:13:55.783 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:55.783 traddr: 192.168.100.8 00:13:55.783 eflags: explicit discovery connections, duplicate discovery information 00:13:55.783 rdma_prtype: not specified 00:13:55.783 rdma_qptype: connected 00:13:55.783 rdma_cms: rdma-cm 00:13:55.783 rdma_pkey: 0x0000 00:13:55.783 =====Discovery Log Entry 1====== 00:13:55.783 trtype: rdma 00:13:55.783 adrfam: ipv4 00:13:55.783 subtype: nvme subsystem 00:13:55.783 treq: not required 00:13:55.783 portid: 0 00:13:55.783 trsvcid: 4420 00:13:55.783 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:55.783 traddr: 192.168.100.8 00:13:55.783 eflags: none 00:13:55.783 rdma_prtype: not specified 00:13:55.783 rdma_qptype: connected 00:13:55.783 rdma_cms: rdma-cm 00:13:55.783 rdma_pkey: 0x0000 00:13:55.783 =====Discovery Log Entry 2====== 00:13:55.783 trtype: rdma 00:13:55.783 adrfam: ipv4 00:13:55.783 subtype: nvme subsystem 00:13:55.783 treq: not required 00:13:55.783 portid: 0 00:13:55.783 trsvcid: 4420 00:13:55.783 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:55.783 traddr: 192.168.100.8 00:13:55.783 eflags: none 00:13:55.783 rdma_prtype: not specified 00:13:55.783 rdma_qptype: connected 00:13:55.783 rdma_cms: rdma-cm 00:13:55.783 rdma_pkey: 0x0000 00:13:55.783 =====Discovery Log Entry 3====== 00:13:55.783 trtype: rdma 00:13:55.783 adrfam: ipv4 00:13:55.783 subtype: nvme subsystem 00:13:55.783 treq: not required 00:13:55.783 portid: 0 00:13:55.783 trsvcid: 4420 00:13:55.783 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:55.783 traddr: 192.168.100.8 00:13:55.783 eflags: none 00:13:55.783 rdma_prtype: not specified 00:13:55.783 rdma_qptype: connected 00:13:55.783 rdma_cms: rdma-cm 00:13:55.783 rdma_pkey: 0x0000 00:13:55.783 =====Discovery Log Entry 4====== 00:13:55.783 trtype: rdma 00:13:55.783 adrfam: ipv4 00:13:55.783 subtype: nvme subsystem 00:13:55.783 treq: not required 00:13:55.783 portid: 0 00:13:55.783 trsvcid: 4420 00:13:55.783 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:55.783 traddr: 192.168.100.8 00:13:55.783 eflags: none 00:13:55.783 rdma_prtype: not specified 00:13:55.783 rdma_qptype: connected 00:13:55.783 rdma_cms: rdma-cm 00:13:55.783 rdma_pkey: 0x0000 00:13:55.783 =====Discovery Log Entry 5====== 00:13:55.783 trtype: rdma 00:13:55.783 adrfam: ipv4 00:13:55.783 subtype: discovery subsystem referral 00:13:55.783 treq: not required 00:13:55.783 portid: 0 00:13:55.783 trsvcid: 4430 00:13:55.783 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:55.783 traddr: 192.168.100.8 00:13:55.783 eflags: none 00:13:55.783 rdma_prtype: unrecognized 00:13:55.783 rdma_qptype: unrecognized 00:13:55.783 rdma_cms: unrecognized 00:13:55.783 rdma_pkey: 0x0000 00:13:55.783 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:55.783 Perform nvmf subsystem discovery via RPC 00:13:55.783 15:18:23 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:55.783 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.783 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.783 [ 00:13:55.783 { 00:13:55.783 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:55.783 "subtype": "Discovery", 00:13:55.783 "listen_addresses": [ 00:13:55.783 { 00:13:55.783 "trtype": "RDMA", 00:13:55.783 "adrfam": "IPv4", 00:13:55.783 "traddr": "192.168.100.8", 00:13:55.783 "trsvcid": "4420" 00:13:55.783 } 00:13:55.783 ], 00:13:55.783 "allow_any_host": true, 00:13:55.783 "hosts": [] 00:13:55.783 }, 00:13:55.783 { 00:13:55.783 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.783 "subtype": "NVMe", 00:13:55.783 "listen_addresses": [ 00:13:55.783 { 00:13:55.783 "trtype": "RDMA", 00:13:55.783 "adrfam": "IPv4", 00:13:55.783 "traddr": "192.168.100.8", 00:13:55.783 "trsvcid": "4420" 00:13:55.783 } 00:13:55.783 ], 00:13:55.783 "allow_any_host": true, 00:13:55.783 "hosts": [], 00:13:55.783 "serial_number": "SPDK00000000000001", 00:13:55.783 "model_number": "SPDK bdev Controller", 00:13:55.783 "max_namespaces": 32, 00:13:55.783 "min_cntlid": 1, 00:13:55.783 "max_cntlid": 65519, 00:13:55.783 "namespaces": [ 00:13:55.783 { 00:13:55.783 "nsid": 1, 00:13:55.783 "bdev_name": "Null1", 00:13:55.783 "name": "Null1", 00:13:55.783 "nguid": "16302ACBE38C4BED8E76819F169FE2EF", 00:13:55.783 "uuid": "16302acb-e38c-4bed-8e76-819f169fe2ef" 00:13:55.783 } 00:13:55.783 ] 00:13:55.783 }, 00:13:55.783 { 00:13:55.783 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:55.783 "subtype": "NVMe", 00:13:55.783 "listen_addresses": [ 00:13:55.783 { 00:13:55.783 "trtype": "RDMA", 00:13:55.783 "adrfam": "IPv4", 00:13:55.783 "traddr": "192.168.100.8", 00:13:55.783 "trsvcid": "4420" 00:13:55.783 } 00:13:55.783 ], 00:13:55.783 "allow_any_host": true, 00:13:55.783 "hosts": [], 00:13:55.783 "serial_number": "SPDK00000000000002", 00:13:55.783 "model_number": "SPDK bdev Controller", 00:13:55.783 "max_namespaces": 32, 00:13:55.783 "min_cntlid": 1, 00:13:55.783 "max_cntlid": 65519, 00:13:55.783 "namespaces": [ 00:13:55.783 { 00:13:55.783 "nsid": 1, 00:13:55.783 "bdev_name": "Null2", 00:13:55.783 "name": "Null2", 00:13:55.783 "nguid": "1B1F484C842C4B4D9C38E8D408B9DF14", 00:13:55.783 "uuid": "1b1f484c-842c-4b4d-9c38-e8d408b9df14" 00:13:55.783 } 00:13:55.784 ] 00:13:55.784 }, 00:13:55.784 { 00:13:55.784 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:55.784 "subtype": "NVMe", 00:13:55.784 "listen_addresses": [ 00:13:55.784 { 00:13:55.784 "trtype": "RDMA", 00:13:55.784 "adrfam": "IPv4", 00:13:55.784 "traddr": "192.168.100.8", 00:13:55.784 "trsvcid": "4420" 00:13:55.784 } 00:13:55.784 ], 00:13:55.784 "allow_any_host": true, 00:13:55.784 "hosts": [], 00:13:55.784 "serial_number": "SPDK00000000000003", 00:13:55.784 "model_number": "SPDK bdev Controller", 00:13:55.784 "max_namespaces": 32, 00:13:55.784 "min_cntlid": 1, 00:13:55.784 "max_cntlid": 65519, 00:13:55.784 "namespaces": [ 00:13:55.784 { 00:13:55.784 "nsid": 1, 00:13:55.784 "bdev_name": "Null3", 00:13:55.784 "name": "Null3", 00:13:55.784 "nguid": "C094C14BE55942719943F9ACEA61805A", 00:13:55.784 "uuid": "c094c14b-e559-4271-9943-f9acea61805a" 00:13:55.784 } 00:13:55.784 ] 00:13:55.784 }, 00:13:55.784 { 00:13:55.784 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:55.784 "subtype": "NVMe", 00:13:55.784 "listen_addresses": [ 00:13:55.784 { 00:13:55.784 
"trtype": "RDMA", 00:13:55.784 "adrfam": "IPv4", 00:13:55.784 "traddr": "192.168.100.8", 00:13:55.784 "trsvcid": "4420" 00:13:55.784 } 00:13:55.784 ], 00:13:55.784 "allow_any_host": true, 00:13:55.784 "hosts": [], 00:13:55.784 "serial_number": "SPDK00000000000004", 00:13:55.784 "model_number": "SPDK bdev Controller", 00:13:55.784 "max_namespaces": 32, 00:13:55.784 "min_cntlid": 1, 00:13:55.784 "max_cntlid": 65519, 00:13:55.784 "namespaces": [ 00:13:55.784 { 00:13:55.784 "nsid": 1, 00:13:55.784 "bdev_name": "Null4", 00:13:55.784 "name": "Null4", 00:13:55.784 "nguid": "23FDDF96BEA9407C8A0EF9D67528969B", 00:13:55.784 "uuid": "23fddf96-bea9-407c-8a0e-f9d67528969b" 00:13:55.784 } 00:13:55.784 ] 00:13:55.784 } 00:13:55.784 ] 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:55.784 
15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:55.784 15:18:23 
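
Teardown mirrors setup in reverse: each subsystem and its backing null bdev are deleted, the referral is removed, and bdev_get_bdevs is expected to come back empty. Condensed into plain calls (same scripts/rpc.py assumption as above):

    for i in $(seq 1 4); do
        scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
        scripts/rpc.py bdev_null_delete Null$i
    done
    scripts/rpc.py nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430
    # nothing should be left behind
    [ -z "$(scripts/rpc.py bdev_get_bdevs | jq -r '.[].name')" ] && echo 'all bdevs removed'
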
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:55.784 rmmod nvme_rdma 00:13:55.784 rmmod nvme_fabrics 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3044740 ']' 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3044740 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 3044740 ']' 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 3044740 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:55.784 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3044740 00:13:56.043 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:56.043 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:56.043 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3044740' 00:13:56.043 killing process with pid 3044740 00:13:56.043 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 3044740 00:13:56.043 15:18:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 3044740 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:57.949 00:13:57.949 real 0m10.797s 00:13:57.949 user 0m13.056s 00:13:57.949 sys 0m6.100s 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:57.949 ************************************ 00:13:57.949 END TEST 
nvmf_target_discovery 00:13:57.949 ************************************ 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:57.949 ************************************ 00:13:57.949 START TEST nvmf_referrals 00:13:57.949 ************************************ 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:13:57.949 * Looking for test storage... 00:13:57.949 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:57.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.949 --rc genhtml_branch_coverage=1 00:13:57.949 --rc genhtml_function_coverage=1 00:13:57.949 --rc genhtml_legend=1 00:13:57.949 --rc geninfo_all_blocks=1 00:13:57.949 --rc geninfo_unexecuted_blocks=1 00:13:57.949 00:13:57.949 ' 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:57.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.949 --rc genhtml_branch_coverage=1 00:13:57.949 --rc genhtml_function_coverage=1 00:13:57.949 --rc genhtml_legend=1 00:13:57.949 --rc geninfo_all_blocks=1 00:13:57.949 --rc geninfo_unexecuted_blocks=1 00:13:57.949 00:13:57.949 ' 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:57.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.949 --rc genhtml_branch_coverage=1 00:13:57.949 --rc genhtml_function_coverage=1 00:13:57.949 --rc genhtml_legend=1 00:13:57.949 --rc geninfo_all_blocks=1 00:13:57.949 --rc geninfo_unexecuted_blocks=1 00:13:57.949 00:13:57.949 ' 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:57.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.949 --rc genhtml_branch_coverage=1 00:13:57.949 --rc genhtml_function_coverage=1 00:13:57.949 --rc genhtml_legend=1 00:13:57.949 --rc geninfo_all_blocks=1 00:13:57.949 --rc geninfo_unexecuted_blocks=1 00:13:57.949 00:13:57.949 ' 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
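
The lcov probe above goes through cmp_versions from scripts/common.sh, which splits both version strings on '.', '-' and ':' (the IFS=.-: reads in the trace) and compares them field by field to decide whether the older-lcov coverage flags are needed. A self-contained sketch of the same idea, not the helper's literal code:

    version_lt() {
        # return success if $1 sorts before $2, comparing dotted fields numerically
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1    # equal is not "less than"
    }
    version_lt 1.15 2 && echo 'lcov predates 2.x, use the branch/function coverage options'
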
nvmf/common.sh@7 -- # uname -s 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:57.949 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:57.950 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
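
Every nvme discover/connect in this test identifies itself with the host NQN and host ID that common.sh derived above from nvme gen-hostnqn; the host ID is simply the UUID portion of the NQN. One way to reproduce that derivation (a sketch; the parameter expansion is illustrative, not necessarily common.sh's exact code):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:809f3706-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # strip everything up to the last ':' to keep the UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    nvme discover "${NVME_HOST[@]}" -t rdma -a 192.168.100.8 -s 8009
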
NVMF_REFERRAL_IP_2=127.0.0.3 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:57.950 15:18:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.077 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:06.077 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:14:06.077 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:06.077 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:06.077 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:06.077 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:06.077 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:06.077 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:14:06.077 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:06.077 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:14:06.077 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@322 -- # mlx=() 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:06.078 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:06.078 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:06.078 Found net devices under 0000:18:00.0: mlx_0_0 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:06.078 Found net devices under 0000:18:00.1: mlx_0_1 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # 
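
Each supported ConnectX PCI function is mapped to its kernel netdev through sysfs, which is how the trace arrives at mlx_0_0 and mlx_0_1. A condensed sketch of that lookup, limited to the two 0x15b3:0x1015 functions actually found above:

    # MT27710-family functions present on this host
    pci_devs=(0000:18:00.0 0000:18:00.1)
    for pci in "${pci_devs[@]}"; do
        # every netdev registered for this PCI function shows up under .../net/
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done
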
[[ rdma == tcp ]] 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # rdma_device_init 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@530 -- # allocate_nic_ips 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:06.078 15:18:32 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:06.078 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:14:06.079 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:06.079 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:14:06.079 altname enp24s0f0np0 00:14:06.079 altname ens785f0np0 00:14:06.079 inet 192.168.100.8/24 scope global mlx_0_0 00:14:06.079 valid_lft forever preferred_lft forever 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:14:06.079 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:06.079 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:14:06.079 altname enp24s0f1np1 00:14:06.079 altname ens785f1np1 00:14:06.079 inet 192.168.100.9/24 scope global mlx_0_1 00:14:06.079 valid_lft forever preferred_lft forever 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list 00:14:06.079 15:18:32 
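
allocate_nic_ips reads each RDMA netdev's IPv4 address with a small ip/awk/cut pipeline and, per the [[ -z ... ]] checks above, only moves on once an address is present; here both ports already carry 192.168.100.8 and 192.168.100.9. The pipeline as run in the trace:

    get_ip_address() {
        local interface=$1
        # fourth field of 'ip -o -4' is the CIDR address; strip the prefix length
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8
    get_ip_address mlx_0_1   # -> 192.168.100.9
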
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:14:06.079 192.168.100.9' 
00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:14:06.079 192.168.100.9' 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # head -n 1 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:14:06.079 192.168.100.9' 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # tail -n +2 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # head -n 1 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3048210 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3048210 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 3048210 ']' 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:06.079 15:18:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.079 [2024-11-06 15:18:32.551331] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
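
nvmfappstart launches the target with shared-memory ID 0, the full tracepoint mask, and a four-core mask, then blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. A simplified sketch of that start-up (the harness's waitforlisten is more careful than this polling loop):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait until the app has created its RPC socket and answers a trivial request
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || { echo 'nvmf_tgt died during start-up'; exit 1; }
        sleep 0.5
    done
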
00:14:06.079 [2024-11-06 15:18:32.551445] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.079 [2024-11-06 15:18:32.702678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:06.079 [2024-11-06 15:18:32.811391] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.079 [2024-11-06 15:18:32.811445] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.079 [2024-11-06 15:18:32.811458] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.079 [2024-11-06 15:18:32.811471] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.079 [2024-11-06 15:18:32.811481] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:06.079 [2024-11-06 15:18:32.813819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.079 [2024-11-06 15:18:32.813902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.080 [2024-11-06 15:18:32.813969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.080 [2024-11-06 15:18:32.814003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:06.080 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:06.080 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:14:06.080 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:06.080 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:06.080 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.080 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.080 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:06.080 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.080 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.080 [2024-11-06 15:18:33.448391] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7fb652b3e940) succeed. 00:14:06.080 [2024-11-06 15:18:33.457899] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7fb6521bd940) succeed. 
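
Creating the RDMA transport is what makes the target claim both ConnectX ports; the two create_ib_device notices above correspond to mlx5_0 and mlx5_1. The transport call as issued by the test, together with the discovery listener it adds in the very next step on the well-known NVMe discovery port (a sketch; -u here is the in-capsule data size the harness passes):

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    # expose the discovery subsystem over RDMA on port 8009
    scripts/rpc.py nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
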
00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.338 [2024-11-06 15:18:33.743512] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
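
With three referrals registered, the steps around this point check the list from two sides: the target's own RPC view (started just above) and what a host actually sees in the discovery log served on port 8009 (run just below). Both reduce to a sorted list of traddr values, so the comparison is a plain string match. A condensed sketch of the two probes:

    # target-side view over RPC
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # host-side view via the discovery service on port 8009
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t rdma -a 192.168.100.8 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    # both should print 127.0.0.2, 127.0.0.3 and 127.0.0.4, one per line
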
common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:06.338 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:06.339 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:06.339 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:06.339 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:14:06.339 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:06.339 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:06.598 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:06.598 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:06.598 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:14:06.598 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.598 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.598 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.598 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:14:06.598 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.598 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.598 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.598 15:18:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq 
length 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # sort 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:06.598 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:06.599 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:14:06.599 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:06.857 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:06.857 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:06.857 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:06.857 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:06.857 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:06.857 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:14:06.857 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:06.857 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:06.857 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:06.857 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:06.857 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:06.857 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:06.857 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ 
nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 
--hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:14:07.116 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:07.375 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:07.375 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:07.375 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:07.375 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:07.375 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:14:07.375 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:07.375 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:07.375 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:07.375 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.375 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.375 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.375 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:07.375 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.375 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.375 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:07.375 15:18:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.375 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:07.633 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:07.633 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:07.633 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:07.633 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:14:07.633 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:07.633 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 
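The checks traced through this stretch compare the referral list reported by the target's RPC with what a host-side `nvme discover` returns. A condensed sketch of that comparison, reusing the exact jq filters from the trace; the host NQN/ID values are the ones generated in this run and would differ on another host, and the rpc.py path is an assumption.

```bash
#!/usr/bin/env bash
# Sketch of the rpc-vs-nvme referral comparison traced above.
rpc=./scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
hostid=809f3706-e051-e711-906e-0017a4403562

# Referral addresses as the target reports them (referrals.sh@21).
rpc_ips=$($rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)

# Referral addresses as a host sees them in the discovery log (referrals.sh@26).
nvme_ips=$(nvme discover --hostnqn=$hostnqn --hostid=$hostid \
               -t rdma -a 192.168.100.8 -s 8009 -o json |
           jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
           sort)

[[ "$rpc_ips" == "$nvme_ips" ]] && echo "referrals match" || echo "mismatch"
```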
00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:14:07.634 rmmod nvme_rdma 00:14:07.634 rmmod nvme_fabrics 00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3048210 ']' 00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3048210 00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 3048210 ']' 00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 3048210 00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3048210 00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3048210' 00:14:07.634 killing process with pid 3048210 00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 3048210 00:14:07.634 15:18:35 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 3048210 00:14:09.537 15:18:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:09.537 15:18:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:14:09.537 00:14:09.537 real 0m11.698s 00:14:09.537 user 0m17.646s 00:14:09.537 sys 0m6.452s 00:14:09.537 15:18:36 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:09.537 15:18:36 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:09.537 ************************************ 00:14:09.537 END TEST nvmf_referrals 00:14:09.537 ************************************ 00:14:09.537 15:18:37 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:14:09.537 15:18:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:09.537 15:18:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:09.537 15:18:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:09.537 ************************************ 00:14:09.537 START TEST nvmf_connect_disconnect 00:14:09.537 ************************************ 00:14:09.537 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:14:09.537 * Looking for test storage... 00:14:09.537 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:09.537 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:09.537 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:14:09.538 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:09.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.798 --rc genhtml_branch_coverage=1 00:14:09.798 --rc genhtml_function_coverage=1 00:14:09.798 --rc genhtml_legend=1 00:14:09.798 --rc geninfo_all_blocks=1 00:14:09.798 --rc geninfo_unexecuted_blocks=1 00:14:09.798 00:14:09.798 ' 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:09.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.798 --rc genhtml_branch_coverage=1 00:14:09.798 --rc genhtml_function_coverage=1 00:14:09.798 --rc genhtml_legend=1 00:14:09.798 --rc geninfo_all_blocks=1 00:14:09.798 --rc geninfo_unexecuted_blocks=1 00:14:09.798 00:14:09.798 ' 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:09.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.798 --rc genhtml_branch_coverage=1 00:14:09.798 --rc genhtml_function_coverage=1 00:14:09.798 --rc genhtml_legend=1 00:14:09.798 --rc geninfo_all_blocks=1 00:14:09.798 --rc geninfo_unexecuted_blocks=1 00:14:09.798 00:14:09.798 ' 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:09.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.798 --rc genhtml_branch_coverage=1 00:14:09.798 --rc genhtml_function_coverage=1 00:14:09.798 --rc genhtml_legend=1 00:14:09.798 --rc geninfo_all_blocks=1 00:14:09.798 --rc geninfo_unexecuted_blocks=1 00:14:09.798 00:14:09.798 ' 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:14:09.798 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.799 15:18:37 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:09.799 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:14:09.799 15:18:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:16.475 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:14:16.476 15:18:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 
00:14:16.476 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:16.476 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:16.476 Found net devices under 0000:18:00.0: mlx_0_0 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 
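From here the common.sh prologue maps each detected mlx5 PCI function to its netdev and, in the allocate_nic_ips step that follows, derives the IPv4 address used for the RDMA listener. A condensed sketch of that walk; the PCI addresses, sysfs path pattern, and ip/awk/cut pipeline are taken from this run's trace and are host-specific.

```bash
#!/usr/bin/env bash
# Sketch of the device/IP discovery traced in common.sh
# (gather_supported_nvmf_pci_devs + allocate_nic_ips). Values are from this run.
for pci in 0000:18:00.0 0000:18:00.1; do
    # Each mlx5 function exposes its netdev under sysfs (common.sh@411).
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        nic=${dev##*/}                                  # e.g. mlx_0_0, mlx_0_1
        # IPv4 address the test will target (common.sh@117).
        ip4=$(ip -o -4 addr show "$nic" | awk '{print $4}' | cut -d/ -f1)
        echo "$pci -> $nic -> ${ip4:-<no address>}"
    done
done
```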
00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:16.476 Found net devices under 0000:18:00.1: mlx_0_1 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname 00:14:16.476 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:16.477 15:18:44 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:14:16.477 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:14:16.738 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:16.738 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:14:16.738 altname enp24s0f0np0 00:14:16.738 altname ens785f0np0 00:14:16.738 inet 192.168.100.8/24 scope global mlx_0_0 00:14:16.738 valid_lft forever preferred_lft forever 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print 
$4}' 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:14:16.738 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:16.738 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:14:16.738 altname enp24s0f1np1 00:14:16.738 altname ens785f1np1 00:14:16.738 inet 192.168.100.9/24 scope global mlx_0_1 00:14:16.738 valid_lft forever preferred_lft forever 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:16.738 15:18:44 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:16.738 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:14:16.739 192.168.100.9' 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:14:16.739 192.168.100.9' 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:14:16.739 192.168.100.9' 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:16.739 15:18:44 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3051774 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3051774 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 3051774 ']' 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:16.739 15:18:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:16.739 [2024-11-06 15:18:44.368705] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:14:16.739 [2024-11-06 15:18:44.368816] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.999 [2024-11-06 15:18:44.518619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:16.999 [2024-11-06 15:18:44.627510] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.999 [2024-11-06 15:18:44.627567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.999 [2024-11-06 15:18:44.627595] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.999 [2024-11-06 15:18:44.627610] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.999 [2024-11-06 15:18:44.627619] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:16.999 [2024-11-06 15:18:44.629967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.999 [2024-11-06 15:18:44.630059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.999 [2024-11-06 15:18:44.630122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.999 [2024-11-06 15:18:44.630179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:17.567 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:17.567 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:14:17.567 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:17.567 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:17.567 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:17.825 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.825 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:14:17.825 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.825 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:17.825 [2024-11-06 15:18:45.222637] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:14:17.825 [2024-11-06 15:18:45.246043] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f26edb84940) succeed. 00:14:17.826 [2024-11-06 15:18:45.255638] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f26edb40940) succeed. 
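At this point the nvmf target application is running and the test has created the RDMA transport over the RPC socket (rpc_cmd is the test framework's wrapper around the SPDK RPC client), after which the two mlx5 IB devices are registered. A sketch of issuing the same call directly with scripts/rpc.py, with the arguments copied from the trace:

    # Create the RDMA transport on a running nvmf_tgt via the default
    # /var/tmp/spdk.sock RPC socket (flags as in the trace above).
    sudo ./scripts/rpc.py nvmf_create_transport -t rdma \
        --num-shared-buffers 1024 -u 8192 -c 0

The "In capsule data size is set to 256" warning is the transport's own adjustment: -c 0 requests no in-capsule data, and per the notice above 256 is the minimum required to support msdbd=16.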
00:14:17.826 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.826 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:17.826 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.826 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:18.085 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.085 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:18.085 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:18.085 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.085 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:18.085 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.085 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:18.085 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.085 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:18.085 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.085 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:18.085 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.085 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:18.085 [2024-11-06 15:18:45.519564] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:18.085 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.085 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:14:18.085 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:14:18.085 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:14:18.085 15:18:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:21.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.035 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:14:40.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:05.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:08.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.662 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:14.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:21.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:30.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:34.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:36.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:39.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:43.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:46.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:49.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:52.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:55.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:58.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:02.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:05.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:08.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:11.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:14.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:17.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:21.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:24.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:27.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:30.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:33.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:36.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:40.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:42.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:46.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:49.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:52.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:56.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:58.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:01.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:05.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:08.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:11.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:14.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:17.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:20.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:24.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:27.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:30.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:33.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:33.494 15:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:19:33.494 15:24:00 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:19:33.494 15:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:33.494 15:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:19:33.494 15:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:33.494 15:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:33.494 15:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:19:33.494 15:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:33.494 15:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:33.494 rmmod nvme_rdma 00:19:33.494 rmmod nvme_fabrics 00:19:33.494 15:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:33.494 15:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:19:33.494 15:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:19:33.494 15:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3051774 ']' 00:19:33.494 15:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3051774 00:19:33.494 15:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 3051774 ']' 00:19:33.494 15:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 3051774 00:19:33.494 15:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 00:19:33.494 15:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:33.494 15:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3051774 00:19:33.494 15:24:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:33.494 15:24:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:33.494 15:24:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3051774' 00:19:33.494 killing process with pid 3051774 00:19:33.494 15:24:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 3051774 00:19:33.494 15:24:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 3051774 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:35.397 00:19:35.397 real 5m25.565s 00:19:35.397 user 21m6.452s 00:19:35.397 sys 0m18.750s 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:19:35.397 
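The long run of "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above is the substance of this test: connect_disconnect.sh sets num_iterations=100 and NVME_CONNECT='nvme connect -i 8', then repeatedly attaches and detaches a host controller against the listener created earlier. A sketch of what one iteration amounts to (the actual loop body lives in test/nvmf/target/connect_disconnect.sh; any verification it performs between the two calls is omitted here):

    # One connect/disconnect round against the subsystem and listener set up above.
    nvme connect -i 8 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # nvme disconnect prints the "NQN:... disconnected 1 controller(s)" line
    # that appears 100 times in the log.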
************************************ 00:19:35.397 END TEST nvmf_connect_disconnect 00:19:35.397 ************************************ 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:35.397 ************************************ 00:19:35.397 START TEST nvmf_multitarget 00:19:35.397 ************************************ 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:19:35.397 * Looking for test storage... 00:19:35.397 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:35.397 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:35.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.397 --rc genhtml_branch_coverage=1 00:19:35.397 --rc genhtml_function_coverage=1 00:19:35.397 --rc genhtml_legend=1 00:19:35.398 --rc geninfo_all_blocks=1 00:19:35.398 --rc geninfo_unexecuted_blocks=1 00:19:35.398 00:19:35.398 ' 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:35.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.398 --rc genhtml_branch_coverage=1 00:19:35.398 --rc genhtml_function_coverage=1 00:19:35.398 --rc genhtml_legend=1 00:19:35.398 --rc geninfo_all_blocks=1 00:19:35.398 --rc geninfo_unexecuted_blocks=1 00:19:35.398 00:19:35.398 ' 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:35.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.398 --rc genhtml_branch_coverage=1 00:19:35.398 --rc genhtml_function_coverage=1 00:19:35.398 --rc genhtml_legend=1 00:19:35.398 --rc geninfo_all_blocks=1 00:19:35.398 --rc geninfo_unexecuted_blocks=1 00:19:35.398 00:19:35.398 ' 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:35.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.398 --rc genhtml_branch_coverage=1 00:19:35.398 --rc genhtml_function_coverage=1 00:19:35.398 --rc genhtml_legend=1 00:19:35.398 --rc geninfo_all_blocks=1 00:19:35.398 --rc geninfo_unexecuted_blocks=1 00:19:35.398 00:19:35.398 ' 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.398 15:24:02 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:35.398 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:19:35.398 15:24:02 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:19:35.398 15:24:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:19:43.523 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:19:43.523 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == 
unknown ]] 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:43.523 Found net devices under 0000:18:00.0: mlx_0_0 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:43.523 Found net devices under 0000:18:00.1: mlx_0_1 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # rdma_device_init 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:43.523 15:24:09 
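Before the networking setup for the multitarget test, nvmf/common.sh enumerates the NICs: it matches the PCI IDs it found (both ports report 0x15b3 - 0x1015) against its table of Mellanox device IDs, then resolves each function's netdev name through sysfs, which is where "Found net devices under 0000:18:00.0: mlx_0_0" and the mlx_0_1 counterpart come from. The sysfs lookup can be reproduced on its own (PCI addresses taken from this run):

    # Resolve the kernel netdev behind each Mellanox port, as the
    # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion does above.
    for pci in 0000:18:00.0 0000:18:00.1; do
        ls /sys/bus/pci/devices/"$pci"/net/   # mlx_0_0 and mlx_0_1 on this host
    done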
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:43.523 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:43.524 15:24:09 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:43.524 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:43.524 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:19:43.524 altname enp24s0f0np0 00:19:43.524 altname ens785f0np0 00:19:43.524 inet 192.168.100.8/24 scope global mlx_0_0 00:19:43.524 valid_lft forever preferred_lft forever 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:43.524 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:43.524 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:19:43.524 altname enp24s0f1np1 00:19:43.524 altname ens785f1np1 00:19:43.524 inet 192.168.100.9/24 scope global mlx_0_1 00:19:43.524 valid_lft forever preferred_lft forever 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:43.524 192.168.100.9' 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:43.524 192.168.100.9' 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # 
head -n 1 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:43.524 192.168.100.9' 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # tail -n +2 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # head -n 1 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3098273 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3098273 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 3098273 ']' 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:43.524 15:24:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:43.524 [2024-11-06 15:24:10.053037] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:19:43.524 [2024-11-06 15:24:10.053155] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.524 [2024-11-06 15:24:10.210707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:43.524 [2024-11-06 15:24:10.325973] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.524 [2024-11-06 15:24:10.326036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.524 [2024-11-06 15:24:10.326049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:43.524 [2024-11-06 15:24:10.326063] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:43.524 [2024-11-06 15:24:10.326073] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:43.524 [2024-11-06 15:24:10.328501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.524 [2024-11-06 15:24:10.328591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.524 [2024-11-06 15:24:10.328654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.524 [2024-11-06 15:24:10.328679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:43.524 15:24:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:43.524 15:24:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:19:43.524 15:24:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:43.525 15:24:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:43.525 15:24:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:43.525 15:24:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.525 15:24:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:43.525 15:24:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:19:43.525 15:24:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:19:43.525 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:19:43.525 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:19:43.525 "nvmf_tgt_1" 00:19:43.525 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:19:43.783 "nvmf_tgt_2" 00:19:43.783 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:19:43.783 
15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:19:43.783 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:19:43.783 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:19:44.041 true 00:19:44.041 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:19:44.041 true 00:19:44.041 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:19:44.041 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:19:44.301 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:19:44.301 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:19:44.301 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:19:44.301 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:44.301 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:19:44.301 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:44.301 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:44.301 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:19:44.301 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:44.301 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:44.301 rmmod nvme_rdma 00:19:44.301 rmmod nvme_fabrics 00:19:44.301 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:44.301 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:19:44.301 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:19:44.301 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3098273 ']' 00:19:44.301 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3098273 00:19:44.301 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 3098273 ']' 00:19:44.301 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 3098273 00:19:44.301 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:19:44.301 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:44.301 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3098273 00:19:44.301 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:44.301 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:44.301 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3098273' 00:19:44.301 killing process with pid 3098273 00:19:44.301 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 3098273 00:19:44.301 15:24:11 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 3098273 00:19:45.678 15:24:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:45.678 15:24:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:45.678 00:19:45.678 real 0m10.234s 00:19:45.678 user 0m12.813s 00:19:45.678 sys 0m6.046s 00:19:45.678 15:24:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:45.678 15:24:12 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:45.678 ************************************ 00:19:45.678 END TEST nvmf_multitarget 00:19:45.678 ************************************ 00:19:45.678 15:24:12 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:19:45.678 15:24:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:45.678 15:24:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:45.678 15:24:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:45.678 ************************************ 00:19:45.678 START TEST nvmf_rpc 00:19:45.678 ************************************ 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:19:45.678 * Looking for test storage... 
00:19:45.678 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:45.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.678 --rc genhtml_branch_coverage=1 00:19:45.678 --rc genhtml_function_coverage=1 00:19:45.678 --rc genhtml_legend=1 00:19:45.678 --rc geninfo_all_blocks=1 00:19:45.678 --rc geninfo_unexecuted_blocks=1 00:19:45.678 00:19:45.678 ' 00:19:45.678 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:45.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.678 --rc genhtml_branch_coverage=1 00:19:45.678 --rc genhtml_function_coverage=1 00:19:45.678 --rc genhtml_legend=1 00:19:45.678 --rc geninfo_all_blocks=1 00:19:45.678 --rc geninfo_unexecuted_blocks=1 00:19:45.678 00:19:45.678 ' 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:45.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.679 --rc genhtml_branch_coverage=1 00:19:45.679 --rc genhtml_function_coverage=1 00:19:45.679 --rc genhtml_legend=1 00:19:45.679 --rc geninfo_all_blocks=1 00:19:45.679 --rc geninfo_unexecuted_blocks=1 00:19:45.679 00:19:45.679 ' 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:45.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.679 --rc genhtml_branch_coverage=1 00:19:45.679 --rc genhtml_function_coverage=1 00:19:45.679 --rc genhtml_legend=1 00:19:45.679 --rc geninfo_all_blocks=1 00:19:45.679 --rc geninfo_unexecuted_blocks=1 00:19:45.679 00:19:45.679 ' 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:45.679 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:45.679 15:24:13 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:19:45.679 15:24:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.380 15:24:19 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:19:52.380 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:19:52.380 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:52.380 Found net devices under 0000:18:00.0: mlx_0_0 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:52.380 Found net devices under 0000:18:00.1: mlx_0_1 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # rdma_device_init 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:52.380 15:24:19 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:52.380 15:24:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:52.380 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:52.380 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:52.380 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.380 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:52.380 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:52.381 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:19:52.381 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:52.381 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.381 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:52.381 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.381 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:52.381 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:52.381 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:19:52.381 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:52.381 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:52.381 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:52.381 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:52.381 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:52.381 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:52.641 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:52.641 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:19:52.641 altname enp24s0f0np0 00:19:52.641 altname ens785f0np0 00:19:52.641 inet 192.168.100.8/24 scope global mlx_0_0 00:19:52.641 valid_lft forever preferred_lft forever 00:19:52.641 
15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:52.641 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:52.641 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:19:52.641 altname enp24s0f1np1 00:19:52.641 altname ens785f1np1 00:19:52.641 inet 192.168.100.9/24 scope global mlx_0_1 00:19:52.641 valid_lft forever preferred_lft forever 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:52.641 192.168.100.9' 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:52.641 192.168.100.9' 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # head -n 1 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:52.641 192.168.100.9' 00:19:52.641 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # tail -n +2 00:19:52.642 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # head -n 1 00:19:52.642 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:52.642 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:52.642 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:52.642 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:52.642 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:52.642 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:52.642 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:19:52.642 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:52.642 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:52.642 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:19:52.642 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3101730 00:19:52.642 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:52.642 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3101730 00:19:52.642 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 3101730 ']' 00:19:52.642 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.642 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:52.642 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.642 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:52.642 15:24:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.642 [2024-11-06 15:24:20.271955] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:19:52.642 [2024-11-06 15:24:20.272080] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.901 [2024-11-06 15:24:20.425636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:53.160 [2024-11-06 15:24:20.538580] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.160 [2024-11-06 15:24:20.538634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.160 [2024-11-06 15:24:20.538647] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.160 [2024-11-06 15:24:20.538662] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.160 [2024-11-06 15:24:20.538672] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:53.160 [2024-11-06 15:24:20.541155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.160 [2024-11-06 15:24:20.541221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.160 [2024-11-06 15:24:20.541238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.160 [2024-11-06 15:24:20.541268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:53.729 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:53.729 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:19:53.729 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:53.729 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:53.729 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:53.729 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.729 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:19:53.729 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.729 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:53.729 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.729 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:19:53.729 "tick_rate": 2300000000, 00:19:53.729 "poll_groups": [ 00:19:53.729 { 00:19:53.729 "name": "nvmf_tgt_poll_group_000", 00:19:53.729 "admin_qpairs": 0, 00:19:53.729 "io_qpairs": 0, 00:19:53.729 "current_admin_qpairs": 0, 00:19:53.729 "current_io_qpairs": 0, 00:19:53.729 "pending_bdev_io": 0, 00:19:53.729 "completed_nvme_io": 0, 00:19:53.729 "transports": [] 00:19:53.729 }, 00:19:53.729 { 00:19:53.729 "name": "nvmf_tgt_poll_group_001", 00:19:53.729 "admin_qpairs": 0, 00:19:53.729 "io_qpairs": 0, 00:19:53.729 "current_admin_qpairs": 0, 00:19:53.729 "current_io_qpairs": 0, 00:19:53.729 "pending_bdev_io": 0, 00:19:53.729 "completed_nvme_io": 0, 00:19:53.729 "transports": [] 00:19:53.729 }, 00:19:53.729 { 00:19:53.729 "name": "nvmf_tgt_poll_group_002", 00:19:53.729 "admin_qpairs": 0, 00:19:53.729 "io_qpairs": 0, 00:19:53.729 "current_admin_qpairs": 0, 00:19:53.729 "current_io_qpairs": 0, 00:19:53.729 "pending_bdev_io": 0, 00:19:53.729 "completed_nvme_io": 0, 00:19:53.729 "transports": [] 00:19:53.729 }, 00:19:53.729 { 00:19:53.729 "name": "nvmf_tgt_poll_group_003", 00:19:53.729 "admin_qpairs": 0, 00:19:53.729 "io_qpairs": 0, 00:19:53.729 "current_admin_qpairs": 0, 00:19:53.729 "current_io_qpairs": 0, 00:19:53.729 "pending_bdev_io": 0, 00:19:53.729 "completed_nvme_io": 0, 00:19:53.729 "transports": [] 00:19:53.729 } 00:19:53.729 ] 00:19:53.729 }' 00:19:53.729 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:19:53.729 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:19:53.729 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:19:53.729 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:19:53.729 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:19:53.729 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:19:53.729 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:19:53.729 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:53.729 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.729 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:53.729 [2024-11-06 15:24:21.282241] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f273db9a940) succeed. 00:19:53.729 [2024-11-06 15:24:21.291804] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f273db56940) succeed. 00:19:53.988 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.988 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:19:53.988 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.988 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:54.247 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.247 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:19:54.247 "tick_rate": 2300000000, 00:19:54.247 "poll_groups": [ 00:19:54.247 { 00:19:54.247 "name": "nvmf_tgt_poll_group_000", 00:19:54.247 "admin_qpairs": 0, 00:19:54.247 "io_qpairs": 0, 00:19:54.247 "current_admin_qpairs": 0, 00:19:54.247 "current_io_qpairs": 0, 00:19:54.247 "pending_bdev_io": 0, 00:19:54.247 "completed_nvme_io": 0, 00:19:54.247 "transports": [ 00:19:54.247 { 00:19:54.247 "trtype": "RDMA", 00:19:54.247 "pending_data_buffer": 0, 00:19:54.247 "devices": [ 00:19:54.247 { 00:19:54.247 "name": "mlx5_0", 00:19:54.247 "polls": 31323, 00:19:54.247 "idle_polls": 31323, 00:19:54.247 "completions": 0, 00:19:54.247 "requests": 0, 00:19:54.247 "request_latency": 0, 00:19:54.247 "pending_free_request": 0, 00:19:54.247 "pending_rdma_read": 0, 00:19:54.247 "pending_rdma_write": 0, 00:19:54.247 "pending_rdma_send": 0, 00:19:54.247 "total_send_wrs": 0, 00:19:54.247 "send_doorbell_updates": 0, 00:19:54.247 "total_recv_wrs": 4096, 00:19:54.247 "recv_doorbell_updates": 1 00:19:54.247 }, 00:19:54.247 { 00:19:54.247 "name": "mlx5_1", 00:19:54.247 "polls": 31323, 00:19:54.247 "idle_polls": 31323, 00:19:54.247 "completions": 0, 00:19:54.247 "requests": 0, 00:19:54.247 "request_latency": 0, 00:19:54.247 "pending_free_request": 0, 00:19:54.247 "pending_rdma_read": 0, 00:19:54.247 "pending_rdma_write": 0, 00:19:54.247 "pending_rdma_send": 0, 00:19:54.247 "total_send_wrs": 0, 00:19:54.247 "send_doorbell_updates": 0, 00:19:54.247 "total_recv_wrs": 4096, 00:19:54.247 "recv_doorbell_updates": 1 00:19:54.247 } 00:19:54.247 ] 00:19:54.247 } 00:19:54.247 ] 00:19:54.247 }, 00:19:54.247 { 00:19:54.247 "name": "nvmf_tgt_poll_group_001", 00:19:54.247 "admin_qpairs": 0, 00:19:54.247 "io_qpairs": 0, 00:19:54.247 "current_admin_qpairs": 0, 00:19:54.247 "current_io_qpairs": 0, 00:19:54.247 "pending_bdev_io": 0, 00:19:54.247 "completed_nvme_io": 0, 00:19:54.247 "transports": [ 00:19:54.247 { 00:19:54.247 "trtype": "RDMA", 00:19:54.247 "pending_data_buffer": 0, 00:19:54.247 "devices": [ 00:19:54.247 { 
00:19:54.247 "name": "mlx5_0", 00:19:54.247 "polls": 20563, 00:19:54.247 "idle_polls": 20563, 00:19:54.247 "completions": 0, 00:19:54.247 "requests": 0, 00:19:54.247 "request_latency": 0, 00:19:54.247 "pending_free_request": 0, 00:19:54.247 "pending_rdma_read": 0, 00:19:54.247 "pending_rdma_write": 0, 00:19:54.247 "pending_rdma_send": 0, 00:19:54.247 "total_send_wrs": 0, 00:19:54.247 "send_doorbell_updates": 0, 00:19:54.247 "total_recv_wrs": 4096, 00:19:54.247 "recv_doorbell_updates": 1 00:19:54.247 }, 00:19:54.247 { 00:19:54.247 "name": "mlx5_1", 00:19:54.247 "polls": 20563, 00:19:54.247 "idle_polls": 20563, 00:19:54.247 "completions": 0, 00:19:54.247 "requests": 0, 00:19:54.247 "request_latency": 0, 00:19:54.247 "pending_free_request": 0, 00:19:54.247 "pending_rdma_read": 0, 00:19:54.247 "pending_rdma_write": 0, 00:19:54.248 "pending_rdma_send": 0, 00:19:54.248 "total_send_wrs": 0, 00:19:54.248 "send_doorbell_updates": 0, 00:19:54.248 "total_recv_wrs": 4096, 00:19:54.248 "recv_doorbell_updates": 1 00:19:54.248 } 00:19:54.248 ] 00:19:54.248 } 00:19:54.248 ] 00:19:54.248 }, 00:19:54.248 { 00:19:54.248 "name": "nvmf_tgt_poll_group_002", 00:19:54.248 "admin_qpairs": 0, 00:19:54.248 "io_qpairs": 0, 00:19:54.248 "current_admin_qpairs": 0, 00:19:54.248 "current_io_qpairs": 0, 00:19:54.248 "pending_bdev_io": 0, 00:19:54.248 "completed_nvme_io": 0, 00:19:54.248 "transports": [ 00:19:54.248 { 00:19:54.248 "trtype": "RDMA", 00:19:54.248 "pending_data_buffer": 0, 00:19:54.248 "devices": [ 00:19:54.248 { 00:19:54.248 "name": "mlx5_0", 00:19:54.248 "polls": 10577, 00:19:54.248 "idle_polls": 10577, 00:19:54.248 "completions": 0, 00:19:54.248 "requests": 0, 00:19:54.248 "request_latency": 0, 00:19:54.248 "pending_free_request": 0, 00:19:54.248 "pending_rdma_read": 0, 00:19:54.248 "pending_rdma_write": 0, 00:19:54.248 "pending_rdma_send": 0, 00:19:54.248 "total_send_wrs": 0, 00:19:54.248 "send_doorbell_updates": 0, 00:19:54.248 "total_recv_wrs": 4096, 00:19:54.248 "recv_doorbell_updates": 1 00:19:54.248 }, 00:19:54.248 { 00:19:54.248 "name": "mlx5_1", 00:19:54.248 "polls": 10577, 00:19:54.248 "idle_polls": 10577, 00:19:54.248 "completions": 0, 00:19:54.248 "requests": 0, 00:19:54.248 "request_latency": 0, 00:19:54.248 "pending_free_request": 0, 00:19:54.248 "pending_rdma_read": 0, 00:19:54.248 "pending_rdma_write": 0, 00:19:54.248 "pending_rdma_send": 0, 00:19:54.248 "total_send_wrs": 0, 00:19:54.248 "send_doorbell_updates": 0, 00:19:54.248 "total_recv_wrs": 4096, 00:19:54.248 "recv_doorbell_updates": 1 00:19:54.248 } 00:19:54.248 ] 00:19:54.248 } 00:19:54.248 ] 00:19:54.248 }, 00:19:54.248 { 00:19:54.248 "name": "nvmf_tgt_poll_group_003", 00:19:54.248 "admin_qpairs": 0, 00:19:54.248 "io_qpairs": 0, 00:19:54.248 "current_admin_qpairs": 0, 00:19:54.248 "current_io_qpairs": 0, 00:19:54.248 "pending_bdev_io": 0, 00:19:54.248 "completed_nvme_io": 0, 00:19:54.248 "transports": [ 00:19:54.248 { 00:19:54.248 "trtype": "RDMA", 00:19:54.248 "pending_data_buffer": 0, 00:19:54.248 "devices": [ 00:19:54.248 { 00:19:54.248 "name": "mlx5_0", 00:19:54.248 "polls": 760, 00:19:54.248 "idle_polls": 760, 00:19:54.248 "completions": 0, 00:19:54.248 "requests": 0, 00:19:54.248 "request_latency": 0, 00:19:54.248 "pending_free_request": 0, 00:19:54.248 "pending_rdma_read": 0, 00:19:54.248 "pending_rdma_write": 0, 00:19:54.248 "pending_rdma_send": 0, 00:19:54.248 "total_send_wrs": 0, 00:19:54.248 "send_doorbell_updates": 0, 00:19:54.248 "total_recv_wrs": 4096, 00:19:54.248 "recv_doorbell_updates": 1 00:19:54.248 }, 00:19:54.248 
{ 00:19:54.248 "name": "mlx5_1", 00:19:54.248 "polls": 760, 00:19:54.248 "idle_polls": 760, 00:19:54.248 "completions": 0, 00:19:54.248 "requests": 0, 00:19:54.248 "request_latency": 0, 00:19:54.248 "pending_free_request": 0, 00:19:54.248 "pending_rdma_read": 0, 00:19:54.248 "pending_rdma_write": 0, 00:19:54.248 "pending_rdma_send": 0, 00:19:54.248 "total_send_wrs": 0, 00:19:54.248 "send_doorbell_updates": 0, 00:19:54.248 "total_recv_wrs": 4096, 00:19:54.248 "recv_doorbell_updates": 1 00:19:54.248 } 00:19:54.248 ] 00:19:54.248 } 00:19:54.248 ] 00:19:54.248 } 00:19:54.248 ] 00:19:54.248 }' 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # 
MALLOC_BLOCK_SIZE=512 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.248 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:54.508 Malloc1 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:54.508 [2024-11-06 15:24:21.960061] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # 
local arg=nvme 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:19:54.508 15:24:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:19:54.508 [2024-11-06 15:24:22.006166] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562' 00:19:54.508 Failed to write to /dev/nvme-fabrics: Input/output error 00:19:54.508 could not add new controller: failed to write to nvme-fabrics device 00:19:54.508 15:24:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:19:54.509 15:24:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:54.509 15:24:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:54.509 15:24:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:54.509 15:24:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:19:54.509 15:24:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.509 15:24:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:54.509 15:24:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.509 15:24:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:55.446 15:24:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:19:55.446 15:24:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:19:55.446 15:24:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:19:55.446 15:24:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:19:55.446 15:24:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:19:57.983 15:24:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 
)) 00:19:57.983 15:24:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:19:57.983 15:24:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:19:57.983 15:24:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:19:57.983 15:24:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:19:57.983 15:24:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:19:57.983 15:24:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:58.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:58.551 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:58.551 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:19:58.551 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:19:58.551 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:58.551 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:19:58.551 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:58.551 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:19:58.551 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:19:58.551 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.551 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:58.551 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.551 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:58.551 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:19:58.551 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:58.551 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:19:58.551 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:58.551 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:19:58.551 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:58.551 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:19:58.551 15:24:26 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:58.552 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:19:58.552 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:19:58.552 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:58.552 [2024-11-06 15:24:26.138412] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562' 00:19:58.552 Failed to write to /dev/nvme-fabrics: Input/output error 00:19:58.552 could not add new controller: failed to write to nvme-fabrics device 00:19:58.552 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:19:58.552 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:58.552 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:58.552 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:58.552 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:19:58.552 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.552 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:58.811 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.811 15:24:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:59.750 15:24:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:19:59.750 15:24:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:19:59.750 15:24:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:19:59.750 15:24:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:19:59.750 15:24:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:20:01.656 15:24:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:20:01.656 15:24:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:20:01.656 15:24:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:20:01.656 15:24:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:20:01.656 15:24:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:20:01.656 15:24:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:20:01.656 15:24:29 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:02.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:02.594 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:02.594 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:20:02.594 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:20:02.594 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:02.594 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:20:02.594 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:02.594 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:20:02.594 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:02.594 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.594 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:02.594 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.594 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:20:02.594 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:20:02.594 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:02.594 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.594 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:02.853 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.853 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:02.853 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.853 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:02.853 [2024-11-06 15:24:30.242797] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:02.853 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.853 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:20:02.853 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.853 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:02.853 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.853 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:02.853 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 
-- # xtrace_disable 00:20:02.853 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:02.853 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.853 15:24:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:20:03.791 15:24:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:20:03.791 15:24:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:20:03.791 15:24:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:20:03.791 15:24:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:20:03.791 15:24:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:20:05.695 15:24:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:20:05.695 15:24:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:20:05.695 15:24:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:20:05.695 15:24:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:20:05.695 15:24:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:20:05.695 15:24:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:20:05.695 15:24:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:06.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:06.634 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:06.634 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:20:06.634 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:20:06.634 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:06.634 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:20:06.634 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:06.634 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:20:06.634 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:06.634 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.634 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:06.634 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.634 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:06.634 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.634 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:06.893 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.893 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:20:06.893 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:06.893 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.893 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:06.893 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.893 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:06.893 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.893 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:06.893 [2024-11-06 15:24:34.288640] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:06.893 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.893 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:20:06.893 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.893 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:06.893 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.893 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:06.893 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.893 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:06.893 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.893 15:24:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:20:07.830 15:24:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:20:07.830 15:24:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:20:07.830 15:24:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:20:07.830 15:24:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:20:07.830 15:24:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:20:09.736 15:24:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:20:09.736 15:24:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
lsblk -l -o NAME,SERIAL 00:20:09.736 15:24:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:20:09.736 15:24:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:20:09.736 15:24:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:20:09.736 15:24:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:20:09.736 15:24:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:10.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:10.674 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:10.933 [2024-11-06 15:24:38.381539] 
rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.933 15:24:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:20:11.871 15:24:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:20:11.871 15:24:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:20:11.871 15:24:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:20:11.871 15:24:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:20:11.871 15:24:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:20:13.776 15:24:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:20:13.776 15:24:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:20:13.776 15:24:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:20:13.776 15:24:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:20:13.776 15:24:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:20:13.776 15:24:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:20:13.776 15:24:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:15.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # 
lsblk -l -o NAME,SERIAL 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:15.154 [2024-11-06 15:24:42.428149] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.154 15:24:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:20:16.092 15:24:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:20:16.092 15:24:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:20:16.092 15:24:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:20:16.092 15:24:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:20:16.092 15:24:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:20:17.997 15:24:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:20:17.997 15:24:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:20:17.997 15:24:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:20:17.997 15:24:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:20:17.997 15:24:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:20:17.997 15:24:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:20:17.997 15:24:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:18.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:18.934 [2024-11-06 15:24:46.479366] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.934 15:24:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:20:19.871 15:24:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:20:19.871 15:24:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:20:19.871 15:24:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:20:19.871 15:24:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:20:19.871 15:24:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:20:22.406 15:24:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:20:22.406 15:24:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:20:22.406 15:24:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:20:22.406 15:24:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:20:22.406 15:24:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:20:22.406 15:24:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:20:22.406 15:24:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:22.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:22.974 [2024-11-06 15:24:50.540375] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.974 15:24:50 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:22.974 [2024-11-06 15:24:50.592631] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:22.974 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.235 [2024-11-06 15:24:50.644783] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
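The stretch of trace around this point cycles the same RPC-only subsystem lifecycle five times: create nqn.2016-06.io.spdk:cnode1, add an RDMA listener on 192.168.100.8:4420, attach the Malloc1 namespace, enable allow-any-host, then remove the namespace and delete the subsystem. A minimal standalone sketch of that loop follows; it assumes SPDK's scripts/rpc.py is on PATH and pointed at an already-running target that has the RDMA transport and the Malloc1 bdev created (the test itself issues the same RPCs through its rpc_cmd wrapper).

#!/usr/bin/env bash
# Sketch of the create/teardown loop exercised here (assumptions: rpc.py on PATH,
# target already running with an RDMA transport and a Malloc1 bdev).
set -e
nqn=nqn.2016-06.io.spdk:cnode1
addr=192.168.100.8
port=4420
for i in $(seq 1 5); do
    rpc.py nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME    # subsystem with the fixed test serial
    rpc.py nvmf_subsystem_add_listener "$nqn" -t rdma -a "$addr" -s "$port"
    rpc.py nvmf_subsystem_add_ns "$nqn" Malloc1                    # namespace 1 backed by the Malloc bdev
    rpc.py nvmf_subsystem_allow_any_host "$nqn"
    rpc.py nvmf_subsystem_remove_ns "$nqn" 1                       # tear down in reverse
    rpc.py nvmf_delete_subsystem "$nqn"
done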
00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.235 [2024-11-06 15:24:50.696979] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.235 15:24:50 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.235 [2024-11-06 15:24:50.749167] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.235 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:20:23.235 "tick_rate": 2300000000, 00:20:23.235 "poll_groups": [ 00:20:23.235 { 00:20:23.235 "name": "nvmf_tgt_poll_group_000", 00:20:23.235 "admin_qpairs": 2, 00:20:23.235 "io_qpairs": 27, 00:20:23.235 "current_admin_qpairs": 0, 00:20:23.235 "current_io_qpairs": 0, 00:20:23.235 "pending_bdev_io": 0, 00:20:23.235 "completed_nvme_io": 135, 00:20:23.235 "transports": [ 00:20:23.235 { 00:20:23.235 "trtype": "RDMA", 00:20:23.235 "pending_data_buffer": 0, 00:20:23.235 "devices": [ 00:20:23.235 { 00:20:23.235 "name": "mlx5_0", 00:20:23.235 "polls": 3216297, 00:20:23.235 "idle_polls": 3215953, 00:20:23.235 "completions": 385, 00:20:23.235 "requests": 192, 00:20:23.235 "request_latency": 45785758, 00:20:23.235 "pending_free_request": 0, 00:20:23.235 "pending_rdma_read": 0, 00:20:23.235 "pending_rdma_write": 0, 00:20:23.235 "pending_rdma_send": 0, 00:20:23.235 "total_send_wrs": 327, 00:20:23.235 "send_doorbell_updates": 171, 00:20:23.235 "total_recv_wrs": 4288, 00:20:23.235 "recv_doorbell_updates": 171 00:20:23.236 }, 00:20:23.236 { 00:20:23.236 "name": "mlx5_1", 00:20:23.236 "polls": 3216297, 00:20:23.236 "idle_polls": 3216297, 00:20:23.236 "completions": 0, 00:20:23.236 "requests": 0, 00:20:23.236 "request_latency": 0, 00:20:23.236 "pending_free_request": 0, 00:20:23.236 "pending_rdma_read": 0, 00:20:23.236 "pending_rdma_write": 0, 00:20:23.236 "pending_rdma_send": 0, 00:20:23.236 "total_send_wrs": 0, 00:20:23.236 "send_doorbell_updates": 0, 00:20:23.236 "total_recv_wrs": 4096, 00:20:23.236 "recv_doorbell_updates": 1 00:20:23.236 } 00:20:23.236 ] 00:20:23.236 } 00:20:23.236 ] 00:20:23.236 }, 00:20:23.236 { 00:20:23.236 "name": "nvmf_tgt_poll_group_001", 00:20:23.236 "admin_qpairs": 2, 00:20:23.236 "io_qpairs": 26, 00:20:23.236 "current_admin_qpairs": 0, 00:20:23.236 "current_io_qpairs": 0, 00:20:23.236 "pending_bdev_io": 0, 00:20:23.236 "completed_nvme_io": 215, 00:20:23.236 "transports": [ 00:20:23.236 { 00:20:23.236 "trtype": "RDMA", 00:20:23.236 "pending_data_buffer": 0, 00:20:23.236 "devices": [ 00:20:23.236 { 00:20:23.236 "name": "mlx5_0", 00:20:23.236 "polls": 3230293, 00:20:23.236 "idle_polls": 3229833, 00:20:23.236 "completions": 542, 00:20:23.236 "requests": 271, 00:20:23.236 "request_latency": 77386382, 00:20:23.236 "pending_free_request": 0, 00:20:23.236 "pending_rdma_read": 0, 00:20:23.236 "pending_rdma_write": 0, 00:20:23.236 "pending_rdma_send": 0, 00:20:23.236 "total_send_wrs": 486, 00:20:23.236 "send_doorbell_updates": 221, 00:20:23.236 "total_recv_wrs": 4367, 00:20:23.236 "recv_doorbell_updates": 222 00:20:23.236 }, 00:20:23.236 { 00:20:23.236 "name": "mlx5_1", 00:20:23.236 "polls": 3230293, 00:20:23.236 "idle_polls": 3230293, 00:20:23.236 "completions": 0, 00:20:23.236 "requests": 0, 00:20:23.236 
"request_latency": 0, 00:20:23.236 "pending_free_request": 0, 00:20:23.236 "pending_rdma_read": 0, 00:20:23.236 "pending_rdma_write": 0, 00:20:23.236 "pending_rdma_send": 0, 00:20:23.236 "total_send_wrs": 0, 00:20:23.236 "send_doorbell_updates": 0, 00:20:23.236 "total_recv_wrs": 4096, 00:20:23.236 "recv_doorbell_updates": 1 00:20:23.236 } 00:20:23.236 ] 00:20:23.236 } 00:20:23.236 ] 00:20:23.236 }, 00:20:23.236 { 00:20:23.236 "name": "nvmf_tgt_poll_group_002", 00:20:23.236 "admin_qpairs": 1, 00:20:23.236 "io_qpairs": 26, 00:20:23.236 "current_admin_qpairs": 0, 00:20:23.236 "current_io_qpairs": 0, 00:20:23.236 "pending_bdev_io": 0, 00:20:23.236 "completed_nvme_io": 28, 00:20:23.236 "transports": [ 00:20:23.236 { 00:20:23.236 "trtype": "RDMA", 00:20:23.236 "pending_data_buffer": 0, 00:20:23.236 "devices": [ 00:20:23.236 { 00:20:23.236 "name": "mlx5_0", 00:20:23.236 "polls": 3329957, 00:20:23.236 "idle_polls": 3329846, 00:20:23.236 "completions": 111, 00:20:23.236 "requests": 55, 00:20:23.236 "request_latency": 9312754, 00:20:23.236 "pending_free_request": 0, 00:20:23.236 "pending_rdma_read": 0, 00:20:23.236 "pending_rdma_write": 0, 00:20:23.236 "pending_rdma_send": 0, 00:20:23.236 "total_send_wrs": 70, 00:20:23.236 "send_doorbell_updates": 56, 00:20:23.236 "total_recv_wrs": 4151, 00:20:23.236 "recv_doorbell_updates": 56 00:20:23.236 }, 00:20:23.236 { 00:20:23.236 "name": "mlx5_1", 00:20:23.236 "polls": 3329957, 00:20:23.236 "idle_polls": 3329957, 00:20:23.236 "completions": 0, 00:20:23.236 "requests": 0, 00:20:23.236 "request_latency": 0, 00:20:23.236 "pending_free_request": 0, 00:20:23.236 "pending_rdma_read": 0, 00:20:23.236 "pending_rdma_write": 0, 00:20:23.236 "pending_rdma_send": 0, 00:20:23.236 "total_send_wrs": 0, 00:20:23.236 "send_doorbell_updates": 0, 00:20:23.236 "total_recv_wrs": 4096, 00:20:23.236 "recv_doorbell_updates": 1 00:20:23.236 } 00:20:23.236 ] 00:20:23.236 } 00:20:23.236 ] 00:20:23.236 }, 00:20:23.236 { 00:20:23.236 "name": "nvmf_tgt_poll_group_003", 00:20:23.236 "admin_qpairs": 2, 00:20:23.236 "io_qpairs": 26, 00:20:23.236 "current_admin_qpairs": 0, 00:20:23.236 "current_io_qpairs": 0, 00:20:23.236 "pending_bdev_io": 0, 00:20:23.236 "completed_nvme_io": 77, 00:20:23.236 "transports": [ 00:20:23.236 { 00:20:23.236 "trtype": "RDMA", 00:20:23.236 "pending_data_buffer": 0, 00:20:23.236 "devices": [ 00:20:23.236 { 00:20:23.236 "name": "mlx5_0", 00:20:23.236 "polls": 2508669, 00:20:23.236 "idle_polls": 2508433, 00:20:23.236 "completions": 262, 00:20:23.236 "requests": 131, 00:20:23.236 "request_latency": 29986870, 00:20:23.236 "pending_free_request": 0, 00:20:23.236 "pending_rdma_read": 0, 00:20:23.236 "pending_rdma_write": 0, 00:20:23.236 "pending_rdma_send": 0, 00:20:23.236 "total_send_wrs": 207, 00:20:23.236 "send_doorbell_updates": 119, 00:20:23.236 "total_recv_wrs": 4227, 00:20:23.236 "recv_doorbell_updates": 120 00:20:23.236 }, 00:20:23.236 { 00:20:23.236 "name": "mlx5_1", 00:20:23.236 "polls": 2508669, 00:20:23.236 "idle_polls": 2508669, 00:20:23.236 "completions": 0, 00:20:23.236 "requests": 0, 00:20:23.236 "request_latency": 0, 00:20:23.236 "pending_free_request": 0, 00:20:23.236 "pending_rdma_read": 0, 00:20:23.236 "pending_rdma_write": 0, 00:20:23.236 "pending_rdma_send": 0, 00:20:23.236 "total_send_wrs": 0, 00:20:23.236 "send_doorbell_updates": 0, 00:20:23.236 "total_recv_wrs": 4096, 00:20:23.236 "recv_doorbell_updates": 1 00:20:23.236 } 00:20:23.236 ] 00:20:23.236 } 00:20:23.236 ] 00:20:23.236 } 00:20:23.236 ] 00:20:23.236 }' 00:20:23.236 15:24:50 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:20:23.236 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:20:23.236 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:20:23.236 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:20:23.496 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:20:23.496 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:20:23.496 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:20:23.496 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:20:23.496 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:20:23.496 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:20:23.496 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:20:23.496 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:20:23.496 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:20:23.496 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:20:23.496 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:20:23.496 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1300 > 0 )) 00:20:23.496 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:20:23.496 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:20:23.496 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:20:23.496 15:24:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:20:23.496 15:24:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 162471764 > 0 )) 00:20:23.496 15:24:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:20:23.496 15:24:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:20:23.496 15:24:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:23.496 15:24:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:20:23.496 15:24:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:23.496 15:24:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:23.496 15:24:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:20:23.496 15:24:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:23.496 15:24:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:23.496 rmmod nvme_rdma 00:20:23.496 rmmod nvme_fabrics 00:20:23.496 15:24:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:23.496 15:24:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:20:23.496 15:24:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:20:23.496 15:24:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3101730 ']' 00:20:23.496 15:24:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3101730 00:20:23.496 15:24:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 3101730 ']' 00:20:23.496 15:24:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 3101730 00:20:23.496 15:24:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:20:23.496 15:24:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:23.496 15:24:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3101730 00:20:23.755 15:24:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:23.755 15:24:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:23.755 15:24:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3101730' 00:20:23.755 killing process with pid 3101730 00:20:23.755 15:24:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 3101730 00:20:23.755 15:24:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 3101730 00:20:25.662 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:25.662 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:25.662 00:20:25.662 real 0m40.030s 00:20:25.662 user 2m9.937s 00:20:25.662 sys 0m7.328s 00:20:25.662 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:25.662 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:25.662 ************************************ 00:20:25.662 END TEST nvmf_rpc 00:20:25.662 ************************************ 00:20:25.662 15:24:53 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:20:25.662 15:24:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:25.662 15:24:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:25.662 15:24:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:25.662 ************************************ 00:20:25.662 START TEST nvmf_invalid 00:20:25.662 ************************************ 00:20:25.662 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:20:25.662 * Looking for test storage... 
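The nvmf_rpc run that just ended finishes by aggregating the nvmf_get_stats output with the jsum helper: each JSON field is extracted with jq and summed with awk, and the totals must be non-zero before nvmftestfini tears the target down. A minimal sketch of the same pattern, assuming the stats JSON has been captured in a shell variable named stats (as rpc.sh does in the trace above):

    # Sum .poll_groups[].io_qpairs across all poll groups and require a non-zero total
    total=$(echo "$stats" | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}')
    (( total > 0 )) || echo "no io_qpairs recorded"

The same check is applied to admin_qpairs, the RDMA device completions, and request_latency before the target is shut down.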
00:20:25.662 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:25.662 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:25.662 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:20:25.662 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:25.922 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:25.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.923 --rc genhtml_branch_coverage=1 00:20:25.923 --rc genhtml_function_coverage=1 00:20:25.923 --rc genhtml_legend=1 00:20:25.923 --rc geninfo_all_blocks=1 00:20:25.923 --rc geninfo_unexecuted_blocks=1 00:20:25.923 00:20:25.923 ' 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:25.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.923 --rc genhtml_branch_coverage=1 00:20:25.923 --rc genhtml_function_coverage=1 00:20:25.923 --rc genhtml_legend=1 00:20:25.923 --rc geninfo_all_blocks=1 00:20:25.923 --rc geninfo_unexecuted_blocks=1 00:20:25.923 00:20:25.923 ' 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:25.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.923 --rc genhtml_branch_coverage=1 00:20:25.923 --rc genhtml_function_coverage=1 00:20:25.923 --rc genhtml_legend=1 00:20:25.923 --rc geninfo_all_blocks=1 00:20:25.923 --rc geninfo_unexecuted_blocks=1 00:20:25.923 00:20:25.923 ' 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:25.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.923 --rc genhtml_branch_coverage=1 00:20:25.923 --rc genhtml_function_coverage=1 00:20:25.923 --rc genhtml_legend=1 00:20:25.923 --rc geninfo_all_blocks=1 00:20:25.923 --rc geninfo_unexecuted_blocks=1 00:20:25.923 00:20:25.923 ' 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:20:25.923 
15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:25.923 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:20:25.923 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:20:25.924 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:25.924 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.924 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:25.924 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:25.924 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:25.924 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.924 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.924 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.924 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:25.924 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:25.924 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:20:25.924 15:24:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:32.497 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:32.497 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:20:32.497 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:32.497 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:32.497 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:32.497 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:32.497 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:32.497 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:20:32.497 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:32.497 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:20:32.497 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:20:32.497 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:20:32.497 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:20:32.497 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:20:32.497 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:20:32.497 15:25:00 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:32.497 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:20:32.498 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:20:32.498 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:32.498 Found net devices under 0000:18:00.0: mlx_0_0 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:32.498 Found net devices under 0000:18:00.1: mlx_0_1 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # rdma_device_init 00:20:32.498 15:25:00 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # uname 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:32.498 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:32.758 15:25:00 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:32.758 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:32.758 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:20:32.758 altname enp24s0f0np0 00:20:32.758 altname ens785f0np0 00:20:32.758 inet 192.168.100.8/24 scope global mlx_0_0 00:20:32.758 valid_lft forever preferred_lft forever 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:32.758 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:32.758 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:20:32.758 altname enp24s0f1np1 00:20:32.758 altname ens785f1np1 00:20:32.758 inet 192.168.100.9/24 scope global mlx_0_1 00:20:32.758 valid_lft forever preferred_lft forever 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
-- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:32.758 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:32.759 192.168.100.9' 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:32.759 192.168.100.9' 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # head -n 1 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- 
# echo '192.168.100.8 00:20:32.759 192.168.100.9' 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # tail -n +2 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # head -n 1 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3109037 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3109037 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 3109037 ']' 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:32.759 15:25:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:33.018 [2024-11-06 15:25:00.422954] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:20:33.018 [2024-11-06 15:25:00.423059] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.018 [2024-11-06 15:25:00.573407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:33.277 [2024-11-06 15:25:00.689800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.277 [2024-11-06 15:25:00.689861] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
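Before the invalid-parameter tests begin, nvmftestinit derives the target addresses from the RDMA-capable netdevs: each mlx_0_* port's IPv4 address is read with the ip/awk/cut pipeline shown in the trace, and the first entry of RDMA_IP_LIST becomes NVMF_FIRST_TARGET_IP (192.168.100.8 in this run). A minimal sketch of that lookup, assuming the interface name mlx_0_0 from this run:

    # Print the first IPv4 address assigned to an RDMA netdev
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1

nvmf_tgt is then started with -m 0xF, so four reactors come up on cores 0-3, as the notices that follow show.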
00:20:33.277 [2024-11-06 15:25:00.689875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.277 [2024-11-06 15:25:00.689889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.277 [2024-11-06 15:25:00.689900] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:33.277 [2024-11-06 15:25:00.692363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.277 [2024-11-06 15:25:00.692447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.277 [2024-11-06 15:25:00.692553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.277 [2024-11-06 15:25:00.692585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:33.846 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:33.846 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:20:33.846 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:33.846 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:33.846 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:33.846 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.846 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:33.846 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15518 00:20:33.846 [2024-11-06 15:25:01.458911] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:20:34.105 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:20:34.105 { 00:20:34.105 "nqn": "nqn.2016-06.io.spdk:cnode15518", 00:20:34.105 "tgt_name": "foobar", 00:20:34.105 "method": "nvmf_create_subsystem", 00:20:34.105 "req_id": 1 00:20:34.105 } 00:20:34.105 Got JSON-RPC error response 00:20:34.105 response: 00:20:34.105 { 00:20:34.105 "code": -32603, 00:20:34.105 "message": "Unable to find target foobar" 00:20:34.105 }' 00:20:34.105 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:20:34.105 { 00:20:34.105 "nqn": "nqn.2016-06.io.spdk:cnode15518", 00:20:34.105 "tgt_name": "foobar", 00:20:34.105 "method": "nvmf_create_subsystem", 00:20:34.105 "req_id": 1 00:20:34.105 } 00:20:34.105 Got JSON-RPC error response 00:20:34.105 response: 00:20:34.105 { 00:20:34.105 "code": -32603, 00:20:34.105 "message": "Unable to find target foobar" 00:20:34.105 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:20:34.105 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:20:34.105 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode30783 00:20:34.105 [2024-11-06 15:25:01.675712] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode30783: invalid serial number 'SPDKISFASTANDAWESOME' 00:20:34.105 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:20:34.105 { 00:20:34.105 "nqn": "nqn.2016-06.io.spdk:cnode30783", 00:20:34.105 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:20:34.105 "method": "nvmf_create_subsystem", 00:20:34.105 "req_id": 1 00:20:34.105 } 00:20:34.105 Got JSON-RPC error response 00:20:34.105 response: 00:20:34.105 { 00:20:34.105 "code": -32602, 00:20:34.105 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:20:34.105 }' 00:20:34.105 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:20:34.105 { 00:20:34.105 "nqn": "nqn.2016-06.io.spdk:cnode30783", 00:20:34.105 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:20:34.105 "method": "nvmf_create_subsystem", 00:20:34.105 "req_id": 1 00:20:34.105 } 00:20:34.105 Got JSON-RPC error response 00:20:34.105 response: 00:20:34.105 { 00:20:34.105 "code": -32602, 00:20:34.105 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:20:34.105 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:20:34.105 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:20:34.105 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23091 00:20:34.366 [2024-11-06 15:25:01.880427] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23091: invalid model number 'SPDK_Controller' 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:20:34.366 { 00:20:34.366 "nqn": "nqn.2016-06.io.spdk:cnode23091", 00:20:34.366 "model_number": "SPDK_Controller\u001f", 00:20:34.366 "method": "nvmf_create_subsystem", 00:20:34.366 "req_id": 1 00:20:34.366 } 00:20:34.366 Got JSON-RPC error response 00:20:34.366 response: 00:20:34.366 { 00:20:34.366 "code": -32602, 00:20:34.366 "message": "Invalid MN SPDK_Controller\u001f" 00:20:34.366 }' 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:20:34.366 { 00:20:34.366 "nqn": "nqn.2016-06.io.spdk:cnode23091", 00:20:34.366 "model_number": "SPDK_Controller\u001f", 00:20:34.366 "method": "nvmf_create_subsystem", 00:20:34.366 "req_id": 1 00:20:34.366 } 00:20:34.366 Got JSON-RPC error response 00:20:34.366 response: 00:20:34.366 { 00:20:34.366 "code": -32602, 00:20:34.366 "message": "Invalid MN SPDK_Controller\u001f" 00:20:34.366 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@21 -- # local chars 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.366 15:25:01 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.366 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.367 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:20:34.367 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:20:34.626 15:25:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:20:34.626 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:20:34.627 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.627 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.627 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:20:34.627 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:20:34.627 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:20:34.627 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.627 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.627 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:20:34.627 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:20:34.627 15:25:02 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:20:34.627 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.627 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.627 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:20:34.627 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:20:34.627 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:20:34.627 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.627 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.627 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ d == \- ]] 00:20:34.627 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'd^RJ@Wjk/~sl-EU/L"MR-' 00:20:34.627 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'd^RJ@Wjk/~sl-EU/L"MR-' nqn.2016-06.io.spdk:cnode16637 00:20:34.887 [2024-11-06 15:25:02.273747] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16637: invalid serial number 'd^RJ@Wjk/~sl-EU/L"MR-' 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:20:34.887 { 00:20:34.887 "nqn": "nqn.2016-06.io.spdk:cnode16637", 00:20:34.887 "serial_number": "d^RJ@Wjk/~sl-EU/L\"MR-", 00:20:34.887 "method": "nvmf_create_subsystem", 00:20:34.887 "req_id": 1 00:20:34.887 } 00:20:34.887 Got JSON-RPC error response 00:20:34.887 response: 00:20:34.887 { 00:20:34.887 "code": -32602, 00:20:34.887 "message": "Invalid SN d^RJ@Wjk/~sl-EU/L\"MR-" 00:20:34.887 }' 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:20:34.887 { 00:20:34.887 "nqn": "nqn.2016-06.io.spdk:cnode16637", 00:20:34.887 "serial_number": "d^RJ@Wjk/~sl-EU/L\"MR-", 00:20:34.887 "method": "nvmf_create_subsystem", 00:20:34.887 "req_id": 1 00:20:34.887 } 00:20:34.887 Got JSON-RPC error response 00:20:34.887 response: 00:20:34.887 { 00:20:34.887 "code": -32602, 00:20:34.887 "message": "Invalid SN d^RJ@Wjk/~sl-EU/L\"MR-" 00:20:34.887 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:20:34.887 15:25:02 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# echo -e '\x31' 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:20:34.887 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.888 15:25:02 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:34.888 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:20:35.148 15:25:02 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 67 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll < length )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:35.148 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:35.149 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ z == \- ]] 00:20:35.149 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'zq~a/S1a1psn@~MU[%iTvYX*;N;P@s\fvC+'\''Sdk-/' 00:20:35.149 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'zq~a/S1a1psn@~MU[%iTvYX*;N;P@s\fvC+'\''Sdk-/' nqn.2016-06.io.spdk:cnode17568 00:20:35.408 [2024-11-06 15:25:02.819659] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17568: invalid model number 'zq~a/S1a1psn@~MU[%iTvYX*;N;P@s\fvC+'Sdk-/' 00:20:35.408 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:20:35.408 { 00:20:35.408 "nqn": "nqn.2016-06.io.spdk:cnode17568", 00:20:35.408 "model_number": "zq~a/S1a1psn@~MU[%iTvYX*;N;P@s\\fvC+'\''Sdk-/", 00:20:35.408 "method": "nvmf_create_subsystem", 00:20:35.408 "req_id": 1 00:20:35.408 } 00:20:35.408 Got JSON-RPC error response 00:20:35.408 response: 00:20:35.408 { 00:20:35.408 "code": -32602, 00:20:35.408 "message": "Invalid MN zq~a/S1a1psn@~MU[%iTvYX*;N;P@s\\fvC+'\''Sdk-/" 00:20:35.408 }' 00:20:35.408 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:20:35.408 { 00:20:35.408 "nqn": "nqn.2016-06.io.spdk:cnode17568", 00:20:35.408 "model_number": "zq~a/S1a1psn@~MU[%iTvYX*;N;P@s\\fvC+'Sdk-/", 00:20:35.408 "method": "nvmf_create_subsystem", 00:20:35.408 "req_id": 1 00:20:35.408 } 00:20:35.408 Got JSON-RPC error response 00:20:35.408 response: 00:20:35.408 { 00:20:35.408 "code": -32602, 00:20:35.408 "message": "Invalid MN zq~a/S1a1psn@~MU[%iTvYX*;N;P@s\\fvC+'Sdk-/" 00:20:35.408 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:20:35.408 15:25:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:20:35.668 [2024-11-06 15:25:03.045132] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f70053bd940) succeed. 00:20:35.668 [2024-11-06 15:25:03.054719] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f7005379940) succeed. 
00:20:35.927 15:25:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:20:36.187 15:25:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:20:36.187 15:25:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:20:36.187 192.168.100.9' 00:20:36.187 15:25:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:20:36.187 15:25:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:20:36.187 15:25:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:20:36.187 [2024-11-06 15:25:03.771964] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:20:36.187 15:25:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:20:36.187 { 00:20:36.187 "nqn": "nqn.2016-06.io.spdk:cnode", 00:20:36.187 "listen_address": { 00:20:36.187 "trtype": "rdma", 00:20:36.187 "traddr": "192.168.100.8", 00:20:36.187 "trsvcid": "4421" 00:20:36.187 }, 00:20:36.187 "method": "nvmf_subsystem_remove_listener", 00:20:36.187 "req_id": 1 00:20:36.187 } 00:20:36.187 Got JSON-RPC error response 00:20:36.187 response: 00:20:36.187 { 00:20:36.187 "code": -32602, 00:20:36.187 "message": "Invalid parameters" 00:20:36.187 }' 00:20:36.187 15:25:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:20:36.187 { 00:20:36.187 "nqn": "nqn.2016-06.io.spdk:cnode", 00:20:36.187 "listen_address": { 00:20:36.187 "trtype": "rdma", 00:20:36.187 "traddr": "192.168.100.8", 00:20:36.187 "trsvcid": "4421" 00:20:36.187 }, 00:20:36.187 "method": "nvmf_subsystem_remove_listener", 00:20:36.187 "req_id": 1 00:20:36.187 } 00:20:36.187 Got JSON-RPC error response 00:20:36.187 response: 00:20:36.187 { 00:20:36.187 "code": -32602, 00:20:36.187 "message": "Invalid parameters" 00:20:36.187 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:20:36.187 15:25:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19095 -i 0 00:20:36.446 [2024-11-06 15:25:03.980715] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19095: invalid cntlid range [0-65519] 00:20:36.446 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:20:36.446 { 00:20:36.446 "nqn": "nqn.2016-06.io.spdk:cnode19095", 00:20:36.446 "min_cntlid": 0, 00:20:36.446 "method": "nvmf_create_subsystem", 00:20:36.446 "req_id": 1 00:20:36.446 } 00:20:36.446 Got JSON-RPC error response 00:20:36.446 response: 00:20:36.446 { 00:20:36.446 "code": -32602, 00:20:36.446 "message": "Invalid cntlid range [0-65519]" 00:20:36.446 }' 00:20:36.446 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:20:36.446 { 00:20:36.446 "nqn": "nqn.2016-06.io.spdk:cnode19095", 00:20:36.446 "min_cntlid": 0, 00:20:36.446 "method": "nvmf_create_subsystem", 00:20:36.447 "req_id": 1 00:20:36.447 } 00:20:36.447 Got JSON-RPC error response 00:20:36.447 response: 00:20:36.447 { 00:20:36.447 "code": -32602, 00:20:36.447 "message": 
"Invalid cntlid range [0-65519]" 00:20:36.447 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:20:36.447 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3815 -i 65520 00:20:36.706 [2024-11-06 15:25:04.181489] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3815: invalid cntlid range [65520-65519] 00:20:36.706 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:20:36.706 { 00:20:36.706 "nqn": "nqn.2016-06.io.spdk:cnode3815", 00:20:36.706 "min_cntlid": 65520, 00:20:36.706 "method": "nvmf_create_subsystem", 00:20:36.706 "req_id": 1 00:20:36.706 } 00:20:36.706 Got JSON-RPC error response 00:20:36.706 response: 00:20:36.706 { 00:20:36.706 "code": -32602, 00:20:36.706 "message": "Invalid cntlid range [65520-65519]" 00:20:36.706 }' 00:20:36.706 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:20:36.706 { 00:20:36.706 "nqn": "nqn.2016-06.io.spdk:cnode3815", 00:20:36.706 "min_cntlid": 65520, 00:20:36.706 "method": "nvmf_create_subsystem", 00:20:36.706 "req_id": 1 00:20:36.706 } 00:20:36.706 Got JSON-RPC error response 00:20:36.706 response: 00:20:36.706 { 00:20:36.706 "code": -32602, 00:20:36.706 "message": "Invalid cntlid range [65520-65519]" 00:20:36.706 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:20:36.706 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25383 -I 0 00:20:36.965 [2024-11-06 15:25:04.378268] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25383: invalid cntlid range [1-0] 00:20:36.965 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:20:36.965 { 00:20:36.965 "nqn": "nqn.2016-06.io.spdk:cnode25383", 00:20:36.965 "max_cntlid": 0, 00:20:36.965 "method": "nvmf_create_subsystem", 00:20:36.965 "req_id": 1 00:20:36.965 } 00:20:36.965 Got JSON-RPC error response 00:20:36.965 response: 00:20:36.965 { 00:20:36.965 "code": -32602, 00:20:36.965 "message": "Invalid cntlid range [1-0]" 00:20:36.965 }' 00:20:36.965 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:20:36.965 { 00:20:36.966 "nqn": "nqn.2016-06.io.spdk:cnode25383", 00:20:36.966 "max_cntlid": 0, 00:20:36.966 "method": "nvmf_create_subsystem", 00:20:36.966 "req_id": 1 00:20:36.966 } 00:20:36.966 Got JSON-RPC error response 00:20:36.966 response: 00:20:36.966 { 00:20:36.966 "code": -32602, 00:20:36.966 "message": "Invalid cntlid range [1-0]" 00:20:36.966 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:20:36.966 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10126 -I 65520 00:20:36.966 [2024-11-06 15:25:04.583022] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10126: invalid cntlid range [1-65520] 00:20:37.225 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:20:37.225 { 00:20:37.225 "nqn": "nqn.2016-06.io.spdk:cnode10126", 00:20:37.225 "max_cntlid": 65520, 00:20:37.225 "method": "nvmf_create_subsystem", 00:20:37.225 "req_id": 1 00:20:37.225 } 00:20:37.225 Got JSON-RPC 
error response 00:20:37.225 response: 00:20:37.225 { 00:20:37.225 "code": -32602, 00:20:37.225 "message": "Invalid cntlid range [1-65520]" 00:20:37.225 }' 00:20:37.225 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:20:37.225 { 00:20:37.225 "nqn": "nqn.2016-06.io.spdk:cnode10126", 00:20:37.225 "max_cntlid": 65520, 00:20:37.225 "method": "nvmf_create_subsystem", 00:20:37.225 "req_id": 1 00:20:37.225 } 00:20:37.225 Got JSON-RPC error response 00:20:37.225 response: 00:20:37.225 { 00:20:37.225 "code": -32602, 00:20:37.225 "message": "Invalid cntlid range [1-65520]" 00:20:37.225 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:20:37.225 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3296 -i 6 -I 5 00:20:37.225 [2024-11-06 15:25:04.775792] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3296: invalid cntlid range [6-5] 00:20:37.225 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:20:37.225 { 00:20:37.225 "nqn": "nqn.2016-06.io.spdk:cnode3296", 00:20:37.225 "min_cntlid": 6, 00:20:37.225 "max_cntlid": 5, 00:20:37.225 "method": "nvmf_create_subsystem", 00:20:37.225 "req_id": 1 00:20:37.225 } 00:20:37.225 Got JSON-RPC error response 00:20:37.225 response: 00:20:37.225 { 00:20:37.225 "code": -32602, 00:20:37.225 "message": "Invalid cntlid range [6-5]" 00:20:37.225 }' 00:20:37.225 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:20:37.225 { 00:20:37.225 "nqn": "nqn.2016-06.io.spdk:cnode3296", 00:20:37.225 "min_cntlid": 6, 00:20:37.225 "max_cntlid": 5, 00:20:37.225 "method": "nvmf_create_subsystem", 00:20:37.225 "req_id": 1 00:20:37.225 } 00:20:37.225 Got JSON-RPC error response 00:20:37.225 response: 00:20:37.225 { 00:20:37.225 "code": -32602, 00:20:37.225 "message": "Invalid cntlid range [6-5]" 00:20:37.225 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:20:37.225 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:20:37.485 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:20:37.485 { 00:20:37.485 "name": "foobar", 00:20:37.485 "method": "nvmf_delete_target", 00:20:37.485 "req_id": 1 00:20:37.485 } 00:20:37.485 Got JSON-RPC error response 00:20:37.485 response: 00:20:37.485 { 00:20:37.485 "code": -32602, 00:20:37.485 "message": "The specified target doesn'\''t exist, cannot delete it." 00:20:37.485 }' 00:20:37.485 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:20:37.485 { 00:20:37.485 "name": "foobar", 00:20:37.485 "method": "nvmf_delete_target", 00:20:37.485 "req_id": 1 00:20:37.485 } 00:20:37.485 Got JSON-RPC error response 00:20:37.485 response: 00:20:37.485 { 00:20:37.485 "code": -32602, 00:20:37.485 "message": "The specified target doesn't exist, cannot delete it." 
00:20:37.485 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:20:37.485 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:20:37.485 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:20:37.485 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:37.485 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:20:37.485 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:37.485 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:37.485 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:20:37.485 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:37.485 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:37.485 rmmod nvme_rdma 00:20:37.485 rmmod nvme_fabrics 00:20:37.485 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:37.485 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:20:37.485 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:20:37.485 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3109037 ']' 00:20:37.485 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3109037 00:20:37.485 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 3109037 ']' 00:20:37.485 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 3109037 00:20:37.485 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:20:37.485 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:37.485 15:25:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3109037 00:20:37.485 15:25:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:37.485 15:25:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:37.485 15:25:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3109037' 00:20:37.485 killing process with pid 3109037 00:20:37.485 15:25:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 3109037 00:20:37.485 15:25:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 3109037 00:20:39.564 15:25:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:39.564 15:25:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:39.564 00:20:39.564 real 0m13.614s 00:20:39.564 user 0m27.352s 00:20:39.564 sys 0m6.718s 00:20:39.564 15:25:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:39.564 15:25:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:39.564 ************************************ 00:20:39.564 
END TEST nvmf_invalid 00:20:39.564 ************************************ 00:20:39.564 15:25:06 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:20:39.564 15:25:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:39.564 15:25:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:39.564 15:25:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:39.564 ************************************ 00:20:39.564 START TEST nvmf_connect_stress 00:20:39.564 ************************************ 00:20:39.564 15:25:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:20:39.564 * Looking for test storage... 00:20:39.564 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:39.564 15:25:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:39.564 15:25:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:20:39.564 15:25:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:39.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.564 --rc genhtml_branch_coverage=1 00:20:39.564 --rc genhtml_function_coverage=1 00:20:39.564 --rc genhtml_legend=1 00:20:39.564 --rc geninfo_all_blocks=1 00:20:39.564 --rc geninfo_unexecuted_blocks=1 00:20:39.564 00:20:39.564 ' 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:39.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.564 --rc genhtml_branch_coverage=1 00:20:39.564 --rc genhtml_function_coverage=1 00:20:39.564 --rc genhtml_legend=1 00:20:39.564 --rc geninfo_all_blocks=1 00:20:39.564 --rc geninfo_unexecuted_blocks=1 00:20:39.564 00:20:39.564 ' 00:20:39.564 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:39.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.565 --rc genhtml_branch_coverage=1 00:20:39.565 --rc genhtml_function_coverage=1 00:20:39.565 --rc genhtml_legend=1 00:20:39.565 --rc geninfo_all_blocks=1 00:20:39.565 --rc geninfo_unexecuted_blocks=1 00:20:39.565 00:20:39.565 ' 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:39.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.565 --rc genhtml_branch_coverage=1 00:20:39.565 --rc genhtml_function_coverage=1 00:20:39.565 --rc genhtml_legend=1 00:20:39.565 --rc geninfo_all_blocks=1 00:20:39.565 --rc geninfo_unexecuted_blocks=1 00:20:39.565 00:20:39.565 ' 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:39.565 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:20:39.565 15:25:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # 
local -ga x722 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:46.142 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:20:46.143 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:20:46.143 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:46.143 Found net devices under 0000:18:00.0: mlx_0_0 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:46.143 Found net devices under 0000:18:00.1: mlx_0_1 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.143 15:25:13 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:46.143 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:46.404 
15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:46.404 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:46.404 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:20:46.404 altname enp24s0f0np0 00:20:46.404 altname ens785f0np0 00:20:46.404 inet 192.168.100.8/24 scope global mlx_0_0 00:20:46.404 valid_lft forever preferred_lft forever 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:46.404 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:46.404 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:20:46.404 altname enp24s0f1np1 00:20:46.404 altname ens785f1np1 00:20:46.404 inet 192.168.100.9/24 scope global mlx_0_1 00:20:46.404 valid_lft forever preferred_lft forever 00:20:46.404 15:25:13 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:46.404 
15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:46.404 192.168.100.9' 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:46.404 192.168.100.9' 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # head -n 1 00:20:46.404 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:46.405 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:46.405 192.168.100.9' 00:20:46.405 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # tail -n +2 00:20:46.405 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # head -n 1 00:20:46.405 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:46.405 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:46.405 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:46.405 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:46.405 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:46.405 15:25:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:46.405 15:25:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:20:46.405 15:25:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:46.405 15:25:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:46.405 15:25:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:46.405 15:25:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3113022 00:20:46.405 15:25:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:46.405 15:25:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3113022 00:20:46.405 15:25:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 3113022 ']' 00:20:46.405 15:25:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.405 15:25:14 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:46.405 15:25:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.405 15:25:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:46.405 15:25:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:46.664 [2024-11-06 15:25:14.107107] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:20:46.664 [2024-11-06 15:25:14.107223] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.664 [2024-11-06 15:25:14.260149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:46.925 [2024-11-06 15:25:14.369316] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.925 [2024-11-06 15:25:14.369376] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.925 [2024-11-06 15:25:14.369405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.925 [2024-11-06 15:25:14.369419] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.925 [2024-11-06 15:25:14.369432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:46.925 [2024-11-06 15:25:14.371696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.925 [2024-11-06 15:25:14.371758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.925 [2024-11-06 15:25:14.371784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:47.493 15:25:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:47.493 15:25:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:20:47.493 15:25:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:47.493 15:25:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:47.493 15:25:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:47.493 15:25:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.493 15:25:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:47.493 15:25:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.493 15:25:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:47.493 [2024-11-06 15:25:15.002618] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f7a9833b940) succeed. 
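The trace above covers the target bring-up that connect_stress.sh performs: nvmf_tgt is started with core mask 0xE, the harness waits for its RPC socket at /var/tmp/spdk.sock, and an RDMA transport is created over the two mlx5 ports discovered earlier. A minimal manual equivalent, assuming rpc_cmd simply forwards its arguments to scripts/rpc.py and using a crude readiness poll in place of waitforlisten, would be:

  # start the target on cores 1-3 (mask 0xE); -e 0xFFFF enables all tracepoint groups, as in the trace
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  # simple readiness check (assumption: the harness instead uses waitforlisten on /var/tmp/spdk.sock)
  until ./scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done
  # create the RDMA transport with the same options seen in the trace
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192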
00:20:47.493 [2024-11-06 15:25:15.012234] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f7a979bd940) succeed. 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:47.753 [2024-11-06 15:25:15.235966] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:47.753 NULL1 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3113223 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:47.753 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:47.754 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:47.754 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:47.754 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:47.754 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:47.754 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:47.754 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:47.754 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:47.754 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:47.754 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:20:47.754 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:47.754 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:47.754 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:47.754 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:47.754 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:47.754 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:47.754 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:47.754 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.754 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:48.323 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.324 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:48.324 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:48.324 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.324 15:25:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:48.583 15:25:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.583 15:25:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:48.583 15:25:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:48.583 15:25:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.583 15:25:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:49.150 15:25:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.150 15:25:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:49.150 15:25:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:49.150 15:25:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.150 15:25:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:49.409 15:25:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.409 15:25:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:49.409 15:25:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:49.409 15:25:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.409 15:25:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:49.668 15:25:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:49.668 15:25:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:49.668 15:25:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:49.668 15:25:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.668 15:25:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:50.235 15:25:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.235 15:25:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:50.235 15:25:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:50.235 15:25:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.235 15:25:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:50.495 15:25:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.495 15:25:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:50.495 15:25:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:50.495 15:25:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.495 15:25:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:50.754 15:25:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.754 15:25:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:50.754 15:25:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:50.754 15:25:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.754 15:25:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:51.322 15:25:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.323 15:25:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:51.323 15:25:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:51.323 15:25:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.323 15:25:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:51.582 15:25:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.582 15:25:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:51.582 15:25:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:51.582 15:25:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.582 15:25:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:51.841 15:25:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:20:51.841 15:25:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:51.841 15:25:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:51.841 15:25:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.841 15:25:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:52.410 15:25:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.410 15:25:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:52.410 15:25:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:52.410 15:25:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.410 15:25:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:52.669 15:25:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.669 15:25:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:52.669 15:25:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:52.669 15:25:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.669 15:25:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:52.928 15:25:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.928 15:25:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:52.928 15:25:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:52.928 15:25:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.928 15:25:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:53.495 15:25:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.495 15:25:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:53.495 15:25:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:53.495 15:25:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.495 15:25:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:53.754 15:25:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.754 15:25:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:53.754 15:25:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:53.754 15:25:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.754 15:25:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:54.323 15:25:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.323 15:25:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:54.323 15:25:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:54.323 15:25:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.323 15:25:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:54.582 15:25:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.582 15:25:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:54.582 15:25:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:54.582 15:25:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.582 15:25:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:54.841 15:25:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.841 15:25:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:54.841 15:25:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:54.841 15:25:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.841 15:25:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:55.410 15:25:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.410 15:25:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:55.410 15:25:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:55.410 15:25:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.410 15:25:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:55.669 15:25:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.669 15:25:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:55.669 15:25:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:55.669 15:25:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.669 15:25:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:55.928 15:25:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.928 15:25:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:55.928 15:25:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:55.928 15:25:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.928 15:25:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:56.495 15:25:23 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.495 15:25:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:56.495 15:25:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:56.495 15:25:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.495 15:25:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:56.754 15:25:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.754 15:25:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:56.755 15:25:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:56.755 15:25:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.755 15:25:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:57.014 15:25:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.014 15:25:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:57.014 15:25:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:57.014 15:25:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.014 15:25:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:57.581 15:25:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.581 15:25:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:57.581 15:25:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:57.581 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.581 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:57.840 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.840 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:57.840 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:57.840 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.840 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:58.099 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3113223 00:20:58.358 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3113223) - No such process 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3113223 
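Before the repeated kill -0 liveness checks above, the trace at 15:25:15 created one subsystem backed by a null bdev, exposed it on the first RDMA port, and launched the connect_stress client against it for 10 seconds; while the client ran, the script replayed a batch of RPCs from rpc.txt and polled the client with kill -0 until it exited. A condensed sketch of that sequence, again assuming the rpc.py plumbing above (the loop body is a stand-in, since the generated rpc.txt contents are not shown in the log):

  # subsystem nqn.2016-06.io.spdk:cnode1: allow any host (-a), serial SPDK00000000000001, up to 10 namespaces
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512
  # hammer the connect/disconnect path from core 0 for 10 seconds, as the harness does
  ./test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
  stress_pid=$!
  while kill -0 "$stress_pid" 2> /dev/null; do
      ./scripts/rpc.py rpc_get_methods > /dev/null   # stand-in for the harness's batched rpc.txt replay
  done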
00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:58.358 rmmod nvme_rdma 00:20:58.358 rmmod nvme_fabrics 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3113022 ']' 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3113022 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 3113022 ']' 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 3113022 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3113022 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3113022' 00:20:58.358 killing process with pid 3113022 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 3113022 00:20:58.358 15:25:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 3113022 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:00.265 00:21:00.265 real 0m20.646s 00:21:00.265 user 0m44.666s 00:21:00.265 sys 0m9.701s 00:21:00.265 15:25:27 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:00.265 ************************************ 00:21:00.265 END TEST nvmf_connect_stress 00:21:00.265 ************************************ 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:00.265 ************************************ 00:21:00.265 START TEST nvmf_fused_ordering 00:21:00.265 ************************************ 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:21:00.265 * Looking for test storage... 00:21:00.265 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v 
< (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:21:00.265 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:00.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.266 --rc genhtml_branch_coverage=1 00:21:00.266 --rc genhtml_function_coverage=1 00:21:00.266 --rc genhtml_legend=1 00:21:00.266 --rc geninfo_all_blocks=1 00:21:00.266 --rc geninfo_unexecuted_blocks=1 00:21:00.266 00:21:00.266 ' 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:00.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.266 --rc genhtml_branch_coverage=1 00:21:00.266 --rc genhtml_function_coverage=1 00:21:00.266 --rc genhtml_legend=1 00:21:00.266 --rc geninfo_all_blocks=1 00:21:00.266 --rc geninfo_unexecuted_blocks=1 00:21:00.266 00:21:00.266 ' 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:00.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.266 --rc genhtml_branch_coverage=1 00:21:00.266 --rc genhtml_function_coverage=1 00:21:00.266 --rc genhtml_legend=1 00:21:00.266 --rc geninfo_all_blocks=1 00:21:00.266 --rc geninfo_unexecuted_blocks=1 00:21:00.266 00:21:00.266 ' 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:00.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.266 --rc genhtml_branch_coverage=1 00:21:00.266 --rc genhtml_function_coverage=1 00:21:00.266 --rc genhtml_legend=1 00:21:00.266 --rc geninfo_all_blocks=1 00:21:00.266 --rc geninfo_unexecuted_blocks=1 00:21:00.266 00:21:00.266 ' 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:00.266 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:21:00.266 15:25:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # 
local -ga x722 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:21:08.395 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:21:08.395 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:08.395 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:21:08.396 Found net devices under 0000:18:00.0: mlx_0_0 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:21:08.396 Found net devices under 0000:18:00.1: mlx_0_1 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.396 15:25:34 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # rdma_device_init 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@530 -- # allocate_nic_ips 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:08.396 
15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:08.396 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:08.396 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:21:08.396 altname enp24s0f0np0 00:21:08.396 altname ens785f0np0 00:21:08.396 inet 192.168.100.8/24 scope global mlx_0_0 00:21:08.396 valid_lft forever preferred_lft forever 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:08.396 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:08.396 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:21:08.396 altname enp24s0f1np1 00:21:08.396 altname ens785f1np1 00:21:08.396 inet 192.168.100.9/24 scope global mlx_0_1 00:21:08.396 valid_lft forever preferred_lft forever 00:21:08.396 15:25:34 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:08.396 
15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:08.396 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:08.397 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:08.397 192.168.100.9' 00:21:08.397 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:08.397 192.168.100.9' 00:21:08.397 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # head -n 1 00:21:08.397 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:08.397 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:08.397 192.168.100.9' 00:21:08.397 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # tail -n +2 00:21:08.397 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # head -n 1 00:21:08.397 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:08.397 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:08.397 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:08.397 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:08.397 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:21:08.397 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:08.397 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:21:08.397 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:08.397 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:08.397 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:08.397 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3117728 00:21:08.397 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3117728 00:21:08.397 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:08.397 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 3117728 ']' 00:21:08.397 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.397 15:25:34 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:08.397 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.397 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:08.397 15:25:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:08.397 [2024-11-06 15:25:34.863583] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:21:08.397 [2024-11-06 15:25:34.863692] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.397 [2024-11-06 15:25:35.012142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.397 [2024-11-06 15:25:35.117241] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.397 [2024-11-06 15:25:35.117303] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.397 [2024-11-06 15:25:35.117332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.397 [2024-11-06 15:25:35.117348] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.397 [2024-11-06 15:25:35.117358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:08.397 [2024-11-06 15:25:35.118570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:08.397 [2024-11-06 15:25:35.745881] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7f413f5b3940) succeed. 00:21:08.397 [2024-11-06 15:25:35.755209] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7f413f56f940) succeed. 
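At this point the trace has started the target (nvmf_tgt pid 3117728 on core mask 0x2), waited for its RPC socket on /var/tmp/spdk.sock, and created the RDMA transport, which triggers the two create_ib_device notices for mlx5_0 and mlx5_1. A rough by-hand equivalent, assuming the same build tree and substituting rpc.py for the rpc_cmd/waitforlisten helpers:

    # Sketch only: start the NVMe-oF target with the flags seen in the trace.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmf_pid=$!
    # Poor man's waitforlisten: retry a trivial RPC until the socket answers.
    until ./scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do sleep 1; done
    # Transport options copied verbatim from the trace: RDMA, 1024 shared buffers, -u 8192.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192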
00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:08.397 [2024-11-06 15:25:35.837998] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:08.397 NULL1 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.397 15:25:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:08.397 [2024-11-06 15:25:35.924969] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
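The RPC sequence just traced builds the target side end to end: subsystem cnode1, an RDMA listener on 192.168.100.8:4420, a 1000 MiB null bdev with 512-byte blocks, and that bdev attached as a namespace; the fused_ordering tool then connects to it (the "Attached to ... size: 1GB" line below). The same setup written out by hand, with rpc.py in place of the rpc_cmd wrapper and all values copied from the trace:

    # Sketch only: reproduce the subsystem/namespace wiring the test performs via rpc_cmd.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512          # 1000 MiB backing device, 512-byte blocks
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # Then run the fused-ordering client exactly as the trace does:
    ./test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The fused_ordering(N) lines that follow are the tool's progress counter as it exercises fused command ordering against that namespace.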
00:21:08.397 [2024-11-06 15:25:35.925043] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3117805 ] 00:21:08.656 Attached to nqn.2016-06.io.spdk:cnode1 00:21:08.656 Namespace ID: 1 size: 1GB 00:21:08.656 fused_ordering(0) 00:21:08.656 fused_ordering(1) 00:21:08.656 fused_ordering(2) 00:21:08.656 fused_ordering(3) 00:21:08.656 fused_ordering(4) 00:21:08.656 fused_ordering(5) 00:21:08.656 fused_ordering(6) 00:21:08.656 fused_ordering(7) 00:21:08.656 fused_ordering(8) 00:21:08.656 fused_ordering(9) 00:21:08.656 fused_ordering(10) 00:21:08.656 fused_ordering(11) 00:21:08.656 fused_ordering(12) 00:21:08.656 fused_ordering(13) 00:21:08.656 fused_ordering(14) 00:21:08.656 fused_ordering(15) 00:21:08.656 fused_ordering(16) 00:21:08.656 fused_ordering(17) 00:21:08.656 fused_ordering(18) 00:21:08.656 fused_ordering(19) 00:21:08.656 fused_ordering(20) 00:21:08.656 fused_ordering(21) 00:21:08.656 fused_ordering(22) 00:21:08.656 fused_ordering(23) 00:21:08.656 fused_ordering(24) 00:21:08.656 fused_ordering(25) 00:21:08.656 fused_ordering(26) 00:21:08.656 fused_ordering(27) 00:21:08.656 fused_ordering(28) 00:21:08.656 fused_ordering(29) 00:21:08.656 fused_ordering(30) 00:21:08.656 fused_ordering(31) 00:21:08.656 fused_ordering(32) 00:21:08.656 fused_ordering(33) 00:21:08.656 fused_ordering(34) 00:21:08.656 fused_ordering(35) 00:21:08.656 fused_ordering(36) 00:21:08.656 fused_ordering(37) 00:21:08.656 fused_ordering(38) 00:21:08.656 fused_ordering(39) 00:21:08.656 fused_ordering(40) 00:21:08.656 fused_ordering(41) 00:21:08.656 fused_ordering(42) 00:21:08.656 fused_ordering(43) 00:21:08.656 fused_ordering(44) 00:21:08.657 fused_ordering(45) 00:21:08.657 fused_ordering(46) 00:21:08.657 fused_ordering(47) 00:21:08.657 fused_ordering(48) 00:21:08.657 fused_ordering(49) 00:21:08.657 fused_ordering(50) 00:21:08.657 fused_ordering(51) 00:21:08.657 fused_ordering(52) 00:21:08.657 fused_ordering(53) 00:21:08.657 fused_ordering(54) 00:21:08.657 fused_ordering(55) 00:21:08.657 fused_ordering(56) 00:21:08.657 fused_ordering(57) 00:21:08.657 fused_ordering(58) 00:21:08.657 fused_ordering(59) 00:21:08.657 fused_ordering(60) 00:21:08.657 fused_ordering(61) 00:21:08.657 fused_ordering(62) 00:21:08.657 fused_ordering(63) 00:21:08.657 fused_ordering(64) 00:21:08.657 fused_ordering(65) 00:21:08.657 fused_ordering(66) 00:21:08.657 fused_ordering(67) 00:21:08.657 fused_ordering(68) 00:21:08.657 fused_ordering(69) 00:21:08.657 fused_ordering(70) 00:21:08.657 fused_ordering(71) 00:21:08.657 fused_ordering(72) 00:21:08.657 fused_ordering(73) 00:21:08.657 fused_ordering(74) 00:21:08.657 fused_ordering(75) 00:21:08.657 fused_ordering(76) 00:21:08.657 fused_ordering(77) 00:21:08.657 fused_ordering(78) 00:21:08.657 fused_ordering(79) 00:21:08.657 fused_ordering(80) 00:21:08.657 fused_ordering(81) 00:21:08.657 fused_ordering(82) 00:21:08.657 fused_ordering(83) 00:21:08.657 fused_ordering(84) 00:21:08.657 fused_ordering(85) 00:21:08.657 fused_ordering(86) 00:21:08.657 fused_ordering(87) 00:21:08.657 fused_ordering(88) 00:21:08.657 fused_ordering(89) 00:21:08.657 fused_ordering(90) 00:21:08.657 fused_ordering(91) 00:21:08.657 fused_ordering(92) 00:21:08.657 fused_ordering(93) 00:21:08.657 fused_ordering(94) 00:21:08.657 fused_ordering(95) 00:21:08.657 fused_ordering(96) 00:21:08.657 fused_ordering(97) 00:21:08.657 fused_ordering(98) 
00:21:08.657–00:21:09.437 fused_ordering(99) … fused_ordering(958) [sequential single-line fused_ordering(N) entries, one per index from 99 through 958, all in the identical format shown; the tail of the sequence continues below]
00:21:09.437 fused_ordering(959) 00:21:09.437 fused_ordering(960) 00:21:09.437 fused_ordering(961) 00:21:09.437 fused_ordering(962) 00:21:09.437 fused_ordering(963) 00:21:09.437 fused_ordering(964) 00:21:09.437 fused_ordering(965) 00:21:09.437 fused_ordering(966) 00:21:09.437 fused_ordering(967) 00:21:09.437 fused_ordering(968) 00:21:09.437 fused_ordering(969) 00:21:09.437 fused_ordering(970) 00:21:09.437 fused_ordering(971) 00:21:09.437 fused_ordering(972) 00:21:09.437 fused_ordering(973) 00:21:09.437 fused_ordering(974) 00:21:09.437 fused_ordering(975) 00:21:09.437 fused_ordering(976) 00:21:09.437 fused_ordering(977) 00:21:09.437 fused_ordering(978) 00:21:09.437 fused_ordering(979) 00:21:09.437 fused_ordering(980) 00:21:09.437 fused_ordering(981) 00:21:09.437 fused_ordering(982) 00:21:09.437 fused_ordering(983) 00:21:09.437 fused_ordering(984) 00:21:09.437 fused_ordering(985) 00:21:09.437 fused_ordering(986) 00:21:09.437 fused_ordering(987) 00:21:09.437 fused_ordering(988) 00:21:09.437 fused_ordering(989) 00:21:09.437 fused_ordering(990) 00:21:09.437 fused_ordering(991) 00:21:09.437 fused_ordering(992) 00:21:09.437 fused_ordering(993) 00:21:09.437 fused_ordering(994) 00:21:09.437 fused_ordering(995) 00:21:09.437 fused_ordering(996) 00:21:09.437 fused_ordering(997) 00:21:09.437 fused_ordering(998) 00:21:09.437 fused_ordering(999) 00:21:09.437 fused_ordering(1000) 00:21:09.437 fused_ordering(1001) 00:21:09.437 fused_ordering(1002) 00:21:09.437 fused_ordering(1003) 00:21:09.437 fused_ordering(1004) 00:21:09.437 fused_ordering(1005) 00:21:09.437 fused_ordering(1006) 00:21:09.437 fused_ordering(1007) 00:21:09.437 fused_ordering(1008) 00:21:09.437 fused_ordering(1009) 00:21:09.437 fused_ordering(1010) 00:21:09.437 fused_ordering(1011) 00:21:09.437 fused_ordering(1012) 00:21:09.437 fused_ordering(1013) 00:21:09.437 fused_ordering(1014) 00:21:09.437 fused_ordering(1015) 00:21:09.437 fused_ordering(1016) 00:21:09.437 fused_ordering(1017) 00:21:09.437 fused_ordering(1018) 00:21:09.437 fused_ordering(1019) 00:21:09.437 fused_ordering(1020) 00:21:09.437 fused_ordering(1021) 00:21:09.437 fused_ordering(1022) 00:21:09.437 fused_ordering(1023) 00:21:09.437 15:25:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:21:09.437 15:25:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:21:09.437 15:25:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:09.437 15:25:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:21:09.437 15:25:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:09.437 15:25:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:09.437 15:25:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:21:09.437 15:25:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:09.437 15:25:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:09.437 rmmod nvme_rdma 00:21:09.437 rmmod nvme_fabrics 00:21:09.437 15:25:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:09.437 15:25:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:21:09.437 15:25:36 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:21:09.437 15:25:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3117728 ']' 00:21:09.437 15:25:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3117728 00:21:09.437 15:25:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 3117728 ']' 00:21:09.437 15:25:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 3117728 00:21:09.437 15:25:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:21:09.437 15:25:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:09.437 15:25:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3117728 00:21:09.437 15:25:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:09.437 15:25:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:09.437 15:25:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3117728' 00:21:09.437 killing process with pid 3117728 00:21:09.437 15:25:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 3117728 00:21:09.437 15:25:36 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 3117728 00:21:10.815 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:10.815 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:10.815 00:21:10.815 real 0m10.669s 00:21:10.815 user 0m6.394s 00:21:10.815 sys 0m5.988s 00:21:10.815 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:10.815 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:10.815 ************************************ 00:21:10.815 END TEST nvmf_fused_ordering 00:21:10.815 ************************************ 00:21:10.815 15:25:38 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:21:10.815 15:25:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:10.815 15:25:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:10.815 15:25:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:10.815 ************************************ 00:21:10.815 START TEST nvmf_ns_masking 00:21:10.815 ************************************ 00:21:10.815 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:21:10.815 * Looking for test storage... 
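The teardown trace above (nvmftestfini) reduces to a short sequence: drop the exit trap, sync, unload the host-side NVMe/RDMA kernel modules with a retry loop, then kill the nvmf target process. A minimal sketch of that sequence follows, assuming the helper name and the standalone-script framing; it is not a verbatim copy of nvmf/common.sh.

#!/usr/bin/env bash
# Sketch of the teardown steps visible in the trace above (helper name assumed).
nvmftestfini_sketch() {
    local nvmfpid=$1
    trap - SIGINT SIGTERM EXIT            # drop the cleanup trap installed at test start
    sync                                  # flush outstanding I/O before unloading modules
    set +e                                # module removal can fail while references drain
    for _ in {1..20}; do
        modprobe -v -r nvme-rdma && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e
    # stop the nvmf_tgt reactor if it is still running
    if [[ -n "$nvmfpid" ]] && kill -0 "$nvmfpid" 2>/dev/null; then
        echo "killing process with pid $nvmfpid"
        kill "$nvmfpid"
        wait "$nvmfpid" 2>/dev/null || true   # wait is only meaningful if the target is a child of this shell
    fi
}
# usage with the pid from the run above: nvmftestfini_sketch 3117728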
00:21:10.815 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:10.815 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:10.815 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:21:10.815 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:11.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.075 --rc genhtml_branch_coverage=1 00:21:11.075 --rc genhtml_function_coverage=1 00:21:11.075 --rc genhtml_legend=1 00:21:11.075 --rc geninfo_all_blocks=1 00:21:11.075 --rc geninfo_unexecuted_blocks=1 00:21:11.075 00:21:11.075 ' 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:11.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.075 --rc genhtml_branch_coverage=1 00:21:11.075 --rc genhtml_function_coverage=1 00:21:11.075 --rc genhtml_legend=1 00:21:11.075 --rc geninfo_all_blocks=1 00:21:11.075 --rc geninfo_unexecuted_blocks=1 00:21:11.075 00:21:11.075 ' 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:11.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.075 --rc genhtml_branch_coverage=1 00:21:11.075 --rc genhtml_function_coverage=1 00:21:11.075 --rc genhtml_legend=1 00:21:11.075 --rc geninfo_all_blocks=1 00:21:11.075 --rc geninfo_unexecuted_blocks=1 00:21:11.075 00:21:11.075 ' 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:11.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.075 --rc genhtml_branch_coverage=1 00:21:11.075 --rc genhtml_function_coverage=1 00:21:11.075 --rc genhtml_legend=1 00:21:11.075 --rc geninfo_all_blocks=1 00:21:11.075 --rc geninfo_unexecuted_blocks=1 00:21:11.075 00:21:11.075 ' 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:11.075 15:25:38 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:21:11.075 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:11.076 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:11.076 15:25:38 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=8f69b1f5-cdd0-4fb8-a747-04ca88c3cb69 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=d0ef5fcf-47ae-48a7-9302-c83e2ffa8bc6 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=4e85436c-a7eb-46ee-9b77-1668dba7fca6 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:21:11.076 15:25:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # 
pci_drivers=() 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:17.646 15:25:45 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:21:17.646 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:21:17.646 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:21:17.646 Found net devices under 0000:18:00.0: mlx_0_0 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:21:17.646 Found net devices under 0000:18:00.1: mlx_0_1 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # rdma_device_init 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:21:17.646 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:21:17.647 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:17.647 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:17.647 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:17.647 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:17.907 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@530 -- # allocate_nic_ips 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:17.908 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:17.908 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:21:17.908 altname enp24s0f0np0 00:21:17.908 altname ens785f0np0 00:21:17.908 inet 192.168.100.8/24 scope global mlx_0_0 00:21:17.908 valid_lft forever preferred_lft forever 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:17.908 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:17.908 link/ether 
50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:21:17.908 altname enp24s0f1np1 00:21:17.908 altname ens785f1np1 00:21:17.908 inet 192.168.100.9/24 scope global mlx_0_1 00:21:17.908 valid_lft forever preferred_lft forever 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 
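Note on the repeated ip/awk/cut lines traced here: get_ip_address reduces to a single pipeline in which "ip -o -4 addr show <interface>" prints one line per IPv4 address, awk keeps the fourth (CIDR) field, and cut drops the prefix length. A minimal stand-alone sketch of the same extraction; the function name get_ipv4 is illustrative, not the helper's real name:

    # Mirrors the ip / awk / cut pipeline used by get_ip_address in the trace.
    get_ipv4() {
        local interface=$1
        # Field 4 of "ip -o -4 addr show" is the address in CIDR form, e.g. 192.168.100.8/24.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ipv4 mlx_0_0    # -> 192.168.100.8
    get_ipv4 mlx_0_1    # -> 192.168.100.9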
00:21:17.908 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:17.909 192.168.100.9' 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:17.909 192.168.100.9' 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # head -n 1 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:17.909 192.168.100.9' 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # tail -n +2 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # head -n 1 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3121075 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3121075 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 3121075 ']' 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.909 15:25:45 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:17.909 15:25:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:18.169 [2024-11-06 15:25:45.614099] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:21:18.169 [2024-11-06 15:25:45.614230] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.169 [2024-11-06 15:25:45.768459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.427 [2024-11-06 15:25:45.880591] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.428 [2024-11-06 15:25:45.880641] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.428 [2024-11-06 15:25:45.880655] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.428 [2024-11-06 15:25:45.880669] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.428 [2024-11-06 15:25:45.880680] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:18.428 [2024-11-06 15:25:45.881955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.993 15:25:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:18.993 15:25:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:21:18.993 15:25:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:18.993 15:25:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:18.993 15:25:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:18.993 15:25:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.993 15:25:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:19.251 [2024-11-06 15:25:46.663930] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7f341132f940) succeed. 00:21:19.251 [2024-11-06 15:25:46.673226] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7f34111bd940) succeed. 
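With the RDMA transport created (both mlx5 IB devices registered above), the target-side setup that follows in the trace is a short sequence of JSON-RPC calls. Condensed, with rpc.py standing in for /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py and every value copied from the trace:

    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

Once the listener is up, the initiator connects with "nvme connect -t rdma ... -a 192.168.100.8 -s 4420" and every visibility assertion after that goes through the ns_is_visible helper. A sketch of that check, assuming the controller lands on /dev/nvme0 as it does in this run (the script derives the name from nvme list-subsys):

    # Succeeds when namespace <nsid> is exposed to this host; a namespace masked
    # from the host reports an all-zero NGUID in Identify Namespace.
    ns_is_visible() {
        local nsid=$1
        nvme list-ns /dev/nvme0 | grep "$nsid" || true    # informational listing
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

    ns_is_visible 0x1 && echo visible || echo masked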
00:21:19.251 15:25:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:21:19.251 15:25:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:21:19.251 15:25:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:19.509 Malloc1 00:21:19.509 15:25:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:21:19.767 Malloc2 00:21:19.767 15:25:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:20.025 15:25:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:21:20.283 15:25:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:20.283 [2024-11-06 15:25:47.898777] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:20.541 15:25:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:21:20.541 15:25:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4e85436c-a7eb-46ee-9b77-1668dba7fca6 -a 192.168.100.8 -s 4420 -i 4 00:21:20.798 15:25:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:21:20.798 15:25:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:21:20.798 15:25:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:21:20.798 15:25:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:21:20.798 15:25:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:21:22.701 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:21:22.701 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:21:22.701 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:21:22.701 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:21:22.701 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:21:22.701 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:21:22.701 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:21:22.701 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") 
| .Paths[0].Name' 00:21:22.701 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:21:22.701 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:21:22.701 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:21:22.701 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:22.701 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:22.701 [ 0]:0x1 00:21:22.701 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:22.701 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:22.959 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=10eb5cf7da46421a94d9820f657e7a2b 00:21:22.959 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 10eb5cf7da46421a94d9820f657e7a2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:22.959 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:21:22.959 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:21:22.959 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:22.959 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:22.959 [ 0]:0x1 00:21:22.959 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:22.959 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:23.217 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=10eb5cf7da46421a94d9820f657e7a2b 00:21:23.217 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 10eb5cf7da46421a94d9820f657e7a2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:23.217 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:21:23.217 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:23.217 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:23.217 [ 1]:0x2 00:21:23.217 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:23.217 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:23.217 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=14b07bc923ea48fe8fea48883a4daadc 00:21:23.217 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 14b07bc923ea48fe8fea48883a4daadc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:23.217 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:21:23.217 15:25:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:21:23.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:23.475 15:25:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:23.733 15:25:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:21:23.992 15:25:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:21:23.992 15:25:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4e85436c-a7eb-46ee-9b77-1668dba7fca6 -a 192.168.100.8 -s 4420 -i 4 00:21:24.250 15:25:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:21:24.250 15:25:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:21:24.250 15:25:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:21:24.250 15:25:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:21:24.250 15:25:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:21:24.250 15:25:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:21:26.148 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:21:26.148 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:21:26.148 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:21:26.148 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:21:26.148 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:21:26.148 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:21:26.148 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:21:26.148 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:26.407 [ 0]:0x2 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=14b07bc923ea48fe8fea48883a4daadc 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 14b07bc923ea48fe8fea48883a4daadc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:26.407 15:25:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:26.665 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:21:26.665 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:26.665 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:26.665 [ 0]:0x1 00:21:26.665 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:26.665 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:26.665 15:25:54 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=10eb5cf7da46421a94d9820f657e7a2b 00:21:26.665 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 10eb5cf7da46421a94d9820f657e7a2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:26.665 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:21:26.665 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:26.665 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:26.665 [ 1]:0x2 00:21:26.665 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:26.665 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:26.665 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=14b07bc923ea48fe8fea48883a4daadc 00:21:26.665 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 14b07bc923ea48fe8fea48883a4daadc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:26.665 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:26.924 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:21:26.924 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:21:26.924 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:21:26.924 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:21:26.924 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:26.924 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:21:26.924 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:26.924 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:21:26.924 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:26.924 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:26.924 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:26.924 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:26.924 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:21:26.924 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:26.924 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:21:26.924 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( 
es > 128 )) 00:21:26.924 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:26.924 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:26.924 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:21:26.924 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:26.924 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:26.924 [ 0]:0x2 00:21:26.924 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:26.924 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:27.182 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=14b07bc923ea48fe8fea48883a4daadc 00:21:27.182 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 14b07bc923ea48fe8fea48883a4daadc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:27.182 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:21:27.182 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:27.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:27.441 15:25:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:27.700 15:25:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:21:27.700 15:25:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4e85436c-a7eb-46ee-9b77-1668dba7fca6 -a 192.168.100.8 -s 4420 -i 4 00:21:27.958 15:25:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:21:27.958 15:25:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:21:27.958 15:25:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:21:27.958 15:25:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:21:27.958 15:25:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:21:27.958 15:25:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:21:29.859 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:21:29.859 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:21:29.859 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:21:29.859 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:21:29.859 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:21:29.859 15:25:57 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:21:29.859 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:21:29.859 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:29.859 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:21:29.859 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:21:29.859 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:21:29.859 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:29.859 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:29.859 [ 0]:0x1 00:21:30.117 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:30.117 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:30.117 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=10eb5cf7da46421a94d9820f657e7a2b 00:21:30.117 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 10eb5cf7da46421a94d9820f657e7a2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:30.117 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:21:30.117 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:30.117 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:30.117 [ 1]:0x2 00:21:30.117 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:30.117 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:30.117 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=14b07bc923ea48fe8fea48883a4daadc 00:21:30.117 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 14b07bc923ea48fe8fea48883a4daadc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:30.117 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:21:30.376 15:25:57 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:30.376 [ 0]:0x2 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=14b07bc923ea48fe8fea48883a4daadc 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 14b07bc923ea48fe8fea48883a4daadc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:21:30.376 15:25:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:30.634 [2024-11-06 15:25:58.091671] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:21:30.634 request: 00:21:30.634 { 00:21:30.634 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.634 "nsid": 2, 00:21:30.634 "host": "nqn.2016-06.io.spdk:host1", 00:21:30.634 "method": "nvmf_ns_remove_host", 00:21:30.634 "req_id": 1 00:21:30.634 } 00:21:30.634 Got JSON-RPC error response 00:21:30.634 response: 00:21:30.634 { 00:21:30.634 "code": -32602, 00:21:30.634 "message": "Invalid parameters" 00:21:30.634 } 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:30.634 15:25:58 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:30.634 [ 0]:0x2 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:30.634 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:30.635 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=14b07bc923ea48fe8fea48883a4daadc 00:21:30.635 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 14b07bc923ea48fe8fea48883a4daadc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:30.635 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:21:30.635 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:31.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:31.202 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3122876 00:21:31.202 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:21:31.202 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:21:31.202 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3122876 /var/tmp/host.sock 00:21:31.202 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 3122876 ']' 00:21:31.202 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:21:31.202 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:31.202 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:21:31.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
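From here a second SPDK application (spdk_tgt, core mask 0x2, RPC socket /var/tmp/host.sock) serves as the host-side RPC target. The trace next removes both namespaces and re-adds them with explicit NGUIDs derived from UUIDs; the dash-stripping half of that conversion (tr -d -) appears in the lines below, while the upper-casing implied by the resulting -g values is an assumption in this sketch:

    # Illustrative uuid-to-nguid conversion matching the -g arguments seen below.
    uuid=8f69b1f5-cdd0-4fb8-a747-04ca88c3cb69
    nguid=$(tr -d - <<< "$uuid" | tr '[:lower:]' '[:upper:]')
    echo "$nguid"    # 8F69B1F5CDD04FB8A74704CA88C3CB69

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid" -i
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

The -i flag presumably corresponds to the --no-auto-visible behaviour exercised earlier, so each namespace is reachable only from hosts explicitly allowed with nvmf_ns_add_host; the bdev_nvme_attach_controller calls that follow then confirm nvme0n1 and nvme1n2 report the expected UUIDs.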
00:21:31.202 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:31.202 15:25:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:31.202 [2024-11-06 15:25:58.664444] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:21:31.202 [2024-11-06 15:25:58.664548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3122876 ] 00:21:31.202 [2024-11-06 15:25:58.809923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.460 [2024-11-06 15:25:58.922627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.412 15:25:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:32.412 15:25:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:21:32.412 15:25:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:32.412 15:25:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:21:32.716 15:26:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 8f69b1f5-cdd0-4fb8-a747-04ca88c3cb69 00:21:32.716 15:26:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:21:32.716 15:26:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8F69B1F5CDD04FB8A74704CA88C3CB69 -i 00:21:33.025 15:26:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid d0ef5fcf-47ae-48a7-9302-c83e2ffa8bc6 00:21:33.025 15:26:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:21:33.025 15:26:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g D0EF5FCF47AE48A79302C83E2FFA8BC6 -i 00:21:33.025 15:26:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:33.284 15:26:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:21:33.543 15:26:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:21:33.543 15:26:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b 
nvme0 00:21:33.801 nvme0n1 00:21:33.801 15:26:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:21:33.801 15:26:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:21:34.060 nvme1n2 00:21:34.060 15:26:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:21:34.060 15:26:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:21:34.060 15:26:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:21:34.060 15:26:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:21:34.060 15:26:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:21:34.319 15:26:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:21:34.320 15:26:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:21:34.320 15:26:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:21:34.320 15:26:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:21:34.578 15:26:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 8f69b1f5-cdd0-4fb8-a747-04ca88c3cb69 == \8\f\6\9\b\1\f\5\-\c\d\d\0\-\4\f\b\8\-\a\7\4\7\-\0\4\c\a\8\8\c\3\c\b\6\9 ]] 00:21:34.578 15:26:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:21:34.578 15:26:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:21:34.578 15:26:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:21:34.578 15:26:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ d0ef5fcf-47ae-48a7-9302-c83e2ffa8bc6 == \d\0\e\f\5\f\c\f\-\4\7\a\e\-\4\8\a\7\-\9\3\0\2\-\c\8\3\e\2\f\f\a\8\b\c\6 ]] 00:21:34.578 15:26:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:34.837 15:26:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:21:35.097 15:26:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 8f69b1f5-cdd0-4fb8-a747-04ca88c3cb69 00:21:35.097 15:26:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:21:35.097 15:26:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8F69B1F5CDD04FB8A74704CA88C3CB69 00:21:35.097 15:26:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:21:35.097 15:26:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8F69B1F5CDD04FB8A74704CA88C3CB69 00:21:35.097 15:26:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:35.097 15:26:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:35.097 15:26:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:35.097 15:26:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:35.097 15:26:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:35.097 15:26:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:35.097 15:26:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:35.097 15:26:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:21:35.097 15:26:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8F69B1F5CDD04FB8A74704CA88C3CB69 00:21:35.357 [2024-11-06 15:26:02.767617] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:21:35.357 [2024-11-06 15:26:02.767669] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:21:35.357 [2024-11-06 15:26:02.767685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:35.357 request: 00:21:35.357 { 00:21:35.357 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:35.357 "namespace": { 00:21:35.357 "bdev_name": "invalid", 00:21:35.357 "nsid": 1, 00:21:35.357 "nguid": "8F69B1F5CDD04FB8A74704CA88C3CB69", 00:21:35.357 "no_auto_visible": false 00:21:35.357 }, 00:21:35.357 "method": "nvmf_subsystem_add_ns", 00:21:35.357 "req_id": 1 00:21:35.357 } 00:21:35.357 Got JSON-RPC error response 00:21:35.357 response: 00:21:35.357 { 00:21:35.357 "code": -32602, 00:21:35.357 "message": "Invalid parameters" 00:21:35.357 } 00:21:35.357 15:26:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:21:35.357 15:26:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:35.357 15:26:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:35.357 15:26:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:35.357 15:26:02 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 8f69b1f5-cdd0-4fb8-a747-04ca88c3cb69 00:21:35.357 15:26:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:21:35.357 15:26:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8F69B1F5CDD04FB8A74704CA88C3CB69 -i 00:21:35.616 15:26:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:21:37.521 15:26:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:21:37.521 15:26:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:21:37.522 15:26:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:21:37.781 15:26:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:21:37.781 15:26:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3122876 00:21:37.781 15:26:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3122876 ']' 00:21:37.781 15:26:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3122876 00:21:37.781 15:26:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:21:37.781 15:26:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:37.781 15:26:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3122876 00:21:37.781 15:26:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:37.781 15:26:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:37.781 15:26:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3122876' 00:21:37.781 killing process with pid 3122876 00:21:37.781 15:26:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3122876 00:21:37.781 15:26:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3122876 00:21:40.316 15:26:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:40.316 15:26:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:40.316 15:26:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:21:40.317 15:26:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:40.317 15:26:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:21:40.317 15:26:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:40.317 15:26:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:40.317 15:26:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:21:40.317 15:26:07 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:40.317 15:26:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:40.317 rmmod nvme_rdma 00:21:40.317 rmmod nvme_fabrics 00:21:40.317 15:26:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:40.317 15:26:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:21:40.317 15:26:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:21:40.317 15:26:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3121075 ']' 00:21:40.317 15:26:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3121075 00:21:40.317 15:26:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3121075 ']' 00:21:40.317 15:26:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3121075 00:21:40.317 15:26:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:21:40.317 15:26:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:40.317 15:26:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3121075 00:21:40.317 15:26:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:40.317 15:26:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:40.317 15:26:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3121075' 00:21:40.317 killing process with pid 3121075 00:21:40.317 15:26:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3121075 00:21:40.317 15:26:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3121075 00:21:42.222 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:42.222 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:42.222 00:21:42.222 real 0m31.125s 00:21:42.222 user 0m40.570s 00:21:42.222 sys 0m8.389s 00:21:42.222 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:42.222 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:42.222 ************************************ 00:21:42.222 END TEST nvmf_ns_masking 00:21:42.222 ************************************ 00:21:42.222 15:26:09 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:21:42.222 15:26:09 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:21:42.222 15:26:09 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:42.222 15:26:09 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:42.222 15:26:09 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:42.222 ************************************ 00:21:42.222 START TEST nvmf_nvme_cli 00:21:42.222 ************************************ 
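(Before the nvme_cli trace starts in earnest: the nvmf_ns_masking segment that ended above reduces to the RPC sequence below. This is a condensed sketch of the calls visible in the trace, not the literal xtrace — the rpc.py path is shortened from /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py, and the target is assumed to be listening on its default RPC socket /var/tmp/spdk.sock.)

    # Adding a namespace backed by a nonexistent bdev name is rejected:
    # bdev_open_ext cannot find a bdev named "invalid" (error -19) and the RPC
    # fails with JSON-RPC error -32602 "Invalid parameters".
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8F69B1F5CDD04FB8A74704CA88C3CB69

    # Re-adding the namespace backed by Malloc1 with the same NGUID completes
    # (uuid2nguid converts 8f69b1f5-cdd0-4fb8-a747-04ca88c3cb69 to its dash-free
    # NGUID form; the trailing -i is passed exactly as ns_masking.sh line 142 does).
    # The host-side check that follows (bdev_get_bdevs | jq length) then reports 0 bdevs.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8F69B1F5CDD04FB8A74704CA88C3CB69 -i

    # Tear-down at the end of the test.
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1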
00:21:42.222 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:21:42.222 * Looking for test storage... 00:21:42.222 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:42.222 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:42.222 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:21:42.222 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:42.222 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:42.222 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:42.222 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:42.222 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:42.222 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:21:42.222 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:42.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.223 --rc genhtml_branch_coverage=1 00:21:42.223 --rc genhtml_function_coverage=1 00:21:42.223 --rc genhtml_legend=1 00:21:42.223 --rc geninfo_all_blocks=1 00:21:42.223 --rc geninfo_unexecuted_blocks=1 00:21:42.223 00:21:42.223 ' 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:42.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.223 --rc genhtml_branch_coverage=1 00:21:42.223 --rc genhtml_function_coverage=1 00:21:42.223 --rc genhtml_legend=1 00:21:42.223 --rc geninfo_all_blocks=1 00:21:42.223 --rc geninfo_unexecuted_blocks=1 00:21:42.223 00:21:42.223 ' 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:42.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.223 --rc genhtml_branch_coverage=1 00:21:42.223 --rc genhtml_function_coverage=1 00:21:42.223 --rc genhtml_legend=1 00:21:42.223 --rc geninfo_all_blocks=1 00:21:42.223 --rc geninfo_unexecuted_blocks=1 00:21:42.223 00:21:42.223 ' 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:42.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.223 --rc genhtml_branch_coverage=1 00:21:42.223 --rc genhtml_function_coverage=1 00:21:42.223 --rc genhtml_legend=1 00:21:42.223 --rc geninfo_all_blocks=1 00:21:42.223 --rc geninfo_unexecuted_blocks=1 00:21:42.223 00:21:42.223 ' 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # 
uname -s 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:42.223 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:42.223 15:26:09 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.223 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:42.224 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:42.224 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:21:42.224 15:26:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:48.823 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:21:48.824 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:21:48.824 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:21:48.824 Found net devices under 0000:18:00.0: mlx_0_0 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:21:48.824 Found net devices under 0000:18:00.1: mlx_0_1 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # rdma_device_init 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' 
Linux ']' 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:48.824 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@530 -- # allocate_nic_ips 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:49.084 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:49.084 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:21:49.084 altname enp24s0f0np0 00:21:49.084 altname ens785f0np0 00:21:49.084 inet 192.168.100.8/24 scope global mlx_0_0 00:21:49.084 valid_lft forever preferred_lft forever 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:49.084 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:49.084 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:21:49.084 altname enp24s0f1np1 00:21:49.084 altname ens785f1np1 00:21:49.084 inet 192.168.100.9/24 scope global mlx_0_1 00:21:49.084 valid_lft forever preferred_lft forever 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:21:49.084 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:49.085 15:26:16 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:49.085 192.168.100.9' 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:49.085 192.168.100.9' 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # head -n 1 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:49.085 192.168.100.9' 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # tail -n +2 00:21:49.085 15:26:16 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # head -n 1 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3127276 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3127276 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 3127276 ']' 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:49.085 15:26:16 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:49.344 [2024-11-06 15:26:16.753311] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:21:49.344 [2024-11-06 15:26:16.753416] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.344 [2024-11-06 15:26:16.904736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:49.604 [2024-11-06 15:26:17.017543] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.604 [2024-11-06 15:26:17.017597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:49.604 [2024-11-06 15:26:17.017611] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.604 [2024-11-06 15:26:17.017625] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.604 [2024-11-06 15:26:17.017636] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:49.604 [2024-11-06 15:26:17.020004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.604 [2024-11-06 15:26:17.020094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.604 [2024-11-06 15:26:17.020160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.604 [2024-11-06 15:26:17.020184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:50.173 15:26:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:50.173 15:26:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:21:50.173 15:26:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:50.173 15:26:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:50.173 15:26:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:50.173 15:26:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.173 15:26:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:50.173 15:26:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.173 15:26:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:50.173 [2024-11-06 15:26:17.648357] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f5d35b1d940) succeed. 00:21:50.173 [2024-11-06 15:26:17.657900] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f5d351bd940) succeed. 
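(With both mlx5 IB devices created, the nvme_cli test builds its target configuration and then drives discovery and connect from the host side. The outline below is a condensed sketch of the commands that appear in the trace that follows: rpc.py paths are shortened, and the host identifiers are abbreviated to the $NVME_HOSTNQN / $NVME_HOSTID values defined in nvmf/common.sh.)

    # Target side: RDMA transport, two 64 MiB malloc bdevs, one subsystem with
    # two namespaces, plus data and discovery listeners on 192.168.100.8:4420.
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

    # Host side: discover, connect, expect two namespaces (/dev/nvme0n1 and
    # /dev/nvme0n2, serial SPDKISFASTANDAWESOME), then disconnect and clean up.
    nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t rdma -a 192.168.100.8 -s 4420
    nvme connect -i 15 --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1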
00:21:50.432 15:26:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.432 15:26:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:50.432 15:26:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.432 15:26:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:50.432 Malloc0 00:21:50.432 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.432 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:50.432 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.432 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:50.692 Malloc1 00:21:50.692 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.692 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:21:50.692 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.692 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:50.692 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.692 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:50.692 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.692 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:50.692 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.692 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:50.692 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.692 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:50.692 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.692 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:50.692 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.692 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:50.692 [2024-11-06 15:26:18.122140] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:50.692 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.692 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:21:50.692 15:26:18 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.692 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:50.692 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.692 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:21:50.692 00:21:50.692 Discovery Log Number of Records 2, Generation counter 2 00:21:50.692 =====Discovery Log Entry 0====== 00:21:50.692 trtype: rdma 00:21:50.692 adrfam: ipv4 00:21:50.692 subtype: current discovery subsystem 00:21:50.692 treq: not required 00:21:50.692 portid: 0 00:21:50.692 trsvcid: 4420 00:21:50.692 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:50.692 traddr: 192.168.100.8 00:21:50.692 eflags: explicit discovery connections, duplicate discovery information 00:21:50.692 rdma_prtype: not specified 00:21:50.692 rdma_qptype: connected 00:21:50.692 rdma_cms: rdma-cm 00:21:50.692 rdma_pkey: 0x0000 00:21:50.692 =====Discovery Log Entry 1====== 00:21:50.692 trtype: rdma 00:21:50.692 adrfam: ipv4 00:21:50.692 subtype: nvme subsystem 00:21:50.692 treq: not required 00:21:50.692 portid: 0 00:21:50.692 trsvcid: 4420 00:21:50.692 subnqn: nqn.2016-06.io.spdk:cnode1 00:21:50.692 traddr: 192.168.100.8 00:21:50.692 eflags: none 00:21:50.692 rdma_prtype: not specified 00:21:50.692 rdma_qptype: connected 00:21:50.692 rdma_cms: rdma-cm 00:21:50.692 rdma_pkey: 0x0000 00:21:50.692 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:21:50.692 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:21:50.692 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:21:50.692 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:50.693 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:21:50.693 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:21:50.693 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:50.693 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:21:50.693 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:50.693 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:21:50.693 15:26:18 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:51.630 15:26:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:21:51.630 15:26:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:21:51.630 15:26:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:21:51.630 15:26:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:21:51.630 15:26:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:21:51.630 15:26:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:21:54.165 /dev/nvme0n2 ]] 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:21:54.165 15:26:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:54.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:54.734 
15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:54.734 rmmod nvme_rdma 00:21:54.734 rmmod nvme_fabrics 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3127276 ']' 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3127276 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 3127276 ']' 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 3127276 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:54.734 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3127276 00:21:54.994 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:54.994 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:54.994 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3127276' 00:21:54.994 killing process with pid 3127276 00:21:54.994 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 3127276 00:21:54.994 15:26:22 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 3127276 00:21:56.901 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:56.901 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:56.901 00:21:56.901 real 0m14.993s 00:21:56.901 user 0m30.048s 00:21:56.901 sys 0m6.198s 00:21:56.901 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:56.901 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:56.901 ************************************ 00:21:56.901 END TEST nvmf_nvme_cli 00:21:56.901 ************************************ 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:57.161 ************************************ 00:21:57.161 START TEST nvmf_auth_target 00:21:57.161 ************************************ 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:21:57.161 * Looking for test storage... 00:21:57.161 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:21:57.161 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:57.422 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:21:57.422 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:21:57.422 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:57.422 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:21:57.422 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:57.422 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:57.422 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:57.422 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:21:57.422 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:57.422 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:57.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.422 --rc genhtml_branch_coverage=1 00:21:57.422 --rc genhtml_function_coverage=1 00:21:57.422 --rc genhtml_legend=1 00:21:57.422 --rc geninfo_all_blocks=1 00:21:57.422 --rc geninfo_unexecuted_blocks=1 00:21:57.422 00:21:57.422 ' 00:21:57.422 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:57.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.422 --rc genhtml_branch_coverage=1 00:21:57.422 --rc genhtml_function_coverage=1 00:21:57.422 --rc genhtml_legend=1 00:21:57.422 --rc geninfo_all_blocks=1 00:21:57.422 --rc geninfo_unexecuted_blocks=1 00:21:57.422 00:21:57.422 ' 00:21:57.422 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:57.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.422 --rc genhtml_branch_coverage=1 00:21:57.422 --rc genhtml_function_coverage=1 00:21:57.422 --rc genhtml_legend=1 00:21:57.422 --rc geninfo_all_blocks=1 00:21:57.422 --rc geninfo_unexecuted_blocks=1 00:21:57.422 00:21:57.422 ' 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:57.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.423 --rc genhtml_branch_coverage=1 00:21:57.423 --rc genhtml_function_coverage=1 00:21:57.423 --rc genhtml_legend=1 00:21:57.423 --rc geninfo_all_blocks=1 00:21:57.423 --rc geninfo_unexecuted_blocks=1 00:21:57.423 00:21:57.423 ' 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.423 15:26:24 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:57.423 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:21:57.423 15:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.996 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:03.996 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:22:03.996 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:03.996 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:03.996 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:03.996 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:03.996 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:03.996 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:22:03.996 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:03.996 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:22:03.996 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:22:03.997 15:26:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:22:03.997 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:03.997 15:26:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:22:03.997 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:22:03.997 Found net devices under 0000:18:00.0: mlx_0_0 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:22:03.997 Found net devices under 0000:18:00.1: mlx_0_1 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.997 15:26:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # rdma_device_init 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:03.997 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:03.997 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:22:03.997 altname enp24s0f0np0 00:22:03.997 altname ens785f0np0 00:22:03.997 inet 192.168.100.8/24 scope global mlx_0_0 00:22:03.997 valid_lft forever preferred_lft forever 00:22:03.997 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:03.998 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:03.998 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:22:04.258 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:04.258 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:22:04.258 altname enp24s0f1np1 00:22:04.258 altname ens785f1np1 00:22:04.258 inet 192.168.100.9/24 scope global mlx_0_1 00:22:04.258 valid_lft forever preferred_lft forever 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 
00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:22:04.258 15:26:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:22:04.258 192.168.100.9' 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:22:04.258 192.168.100.9' 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # head -n 1 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:22:04.258 192.168.100.9' 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # tail -n +2 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # head -n 1 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3131304 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3131304 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3131304 ']' 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
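The RDMA_IP_LIST handling logged just above (nvmf/common.sh@484-486) keeps the discovered addresses as a newline-separated list and peels off the first two entries with head/tail. Reduced to a self-contained snippet, using the addresses that appear in this log:

  # Addresses as discovered above; the list is newline-separated.
  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
  echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9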
00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:04.258 15:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.194 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:05.194 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:22:05.194 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:05.194 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:05.194 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.194 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.194 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3131351 00:22:05.194 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:22:05.194 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:05.194 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e654ecbdb7a0e67d995215a555fa4ef12bb5fbfbf818d751 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.1r2 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e654ecbdb7a0e67d995215a555fa4ef12bb5fbfbf818d751 0 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e654ecbdb7a0e67d995215a555fa4ef12bb5fbfbf818d751 0 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e654ecbdb7a0e67d995215a555fa4ef12bb5fbfbf818d751 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@733 -- # python - 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.1r2 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.1r2 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.1r2 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=75bc37d213f7544cfdc6033ebf29d206d7acc5d784b6a0168294c899b68d7c12 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.cHw 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 75bc37d213f7544cfdc6033ebf29d206d7acc5d784b6a0168294c899b68d7c12 3 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 75bc37d213f7544cfdc6033ebf29d206d7acc5d784b6a0168294c899b68d7c12 3 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=75bc37d213f7544cfdc6033ebf29d206d7acc5d784b6a0168294c899b68d7c12 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:22:05.195 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.cHw 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.cHw 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.cHw 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:22:05.454 15:26:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c917140a2fe27a137413b5ce563c3069 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.tkB 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c917140a2fe27a137413b5ce563c3069 1 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c917140a2fe27a137413b5ce563c3069 1 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c917140a2fe27a137413b5ce563c3069 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.tkB 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.tkB 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.tkB 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9844def9a4fc0ec02371ef758f8a8e3fa3686911bb5e7bce 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.z6T 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9844def9a4fc0ec02371ef758f8a8e3fa3686911bb5e7bce 2 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9844def9a4fc0ec02371ef758f8a8e3fa3686911bb5e7bce 2 00:22:05.454 15:26:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9844def9a4fc0ec02371ef758f8a8e3fa3686911bb5e7bce 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.z6T 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.z6T 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.z6T 00:22:05.454 15:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:22:05.454 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:22:05.454 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:05.454 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:22:05.454 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:22:05.454 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:22:05.454 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:05.454 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6bb3628523f3d712c127fdc88e38079520f457d714602c8b 00:22:05.454 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:22:05.454 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ny6 00:22:05.454 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6bb3628523f3d712c127fdc88e38079520f457d714602c8b 2 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6bb3628523f3d712c127fdc88e38079520f457d714602c8b 2 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6bb3628523f3d712c127fdc88e38079520f457d714602c8b 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ny6 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ny6 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.ny6 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
gen_dhchap_key sha256 32 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7417e4a7176f7067236ebf53c8cecbd9 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.aXq 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7417e4a7176f7067236ebf53c8cecbd9 1 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7417e4a7176f7067236ebf53c8cecbd9 1 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7417e4a7176f7067236ebf53c8cecbd9 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:22:05.455 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.aXq 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.aXq 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.aXq 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d5dd585460a9075e091528ace9359afbd8398a691df362b5565f088a32ad0334 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:22:05.714 15:26:33 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.PwT 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d5dd585460a9075e091528ace9359afbd8398a691df362b5565f088a32ad0334 3 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d5dd585460a9075e091528ace9359afbd8398a691df362b5565f088a32ad0334 3 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d5dd585460a9075e091528ace9359afbd8398a691df362b5565f088a32ad0334 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.PwT 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.PwT 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.PwT 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3131304 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3131304 ']' 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:05.714 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.973 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:05.973 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:22:05.973 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3131351 /var/tmp/host.sock 00:22:05.973 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3131351 ']' 00:22:05.973 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:22:05.973 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:05.973 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
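[editor's note] The trace above generates the DH-HMAC-CHAP secrets (key0..key3 and the controller keys ckey0..ckey2) that the rest of the run authenticates with. A minimal sketch of that generator, reconstructed only from the commands visible in the xtrace; the inline "python -" body is not shown in the log, so the base64/CRC-32 encoding below is an assumption inferred from the DHHC-1:<digest>:<base64>: secrets printed later in this log.

    # Reconstructed from the xtrace above (nvmf/common.sh); digest ids 0-3 map to
    # null/sha256/sha384/sha512, matching the DHHC-1:00/01/02/03 prefixes in the log.
    gen_dhchap_key() {                                   # e.g. gen_dhchap_key sha384 48
        local digest len file key
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        digest=$1 len=$2
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters of randomness
        file=$(mktemp -t "spdk.key-$digest.XXX")
        format_dhchap_key "$key" "${digests[$digest]}" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

    format_dhchap_key() { format_key DHHC-1 "$1" "$2"; }

    # Assumed encoding: DHHC-1:<digest id>:<base64(hex-key-string || crc32)>:
    # (consistent with the secrets that appear later in this log; CRC byte order assumed)
    format_key() {
        local prefix=$1 key=$2 digest=$3
        python3 -c 'import base64,sys,zlib; k=sys.argv[2].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("{}:{:02x}:{}:".format(sys.argv[1], int(sys.argv[3]), base64.b64encode(k+crc).decode()), end="")' "$prefix" "$key" "$digest"
    }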
00:22:05.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:22:05.973 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:05.973 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.231 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:06.231 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:22:06.231 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:22:06.232 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.232 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.490 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.490 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:22:06.490 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1r2 00:22:06.490 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.490 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.490 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.490 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.1r2 00:22:06.490 15:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.1r2 00:22:06.748 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.cHw ]] 00:22:06.748 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.cHw 00:22:06.748 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.748 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.748 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.748 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.cHw 00:22:06.749 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.cHw 00:22:07.007 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:22:07.007 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.tkB 00:22:07.007 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.007 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.007 15:26:34 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.007 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.tkB 00:22:07.007 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.tkB 00:22:07.007 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.z6T ]] 00:22:07.007 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.z6T 00:22:07.007 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.007 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.007 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.007 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.z6T 00:22:07.007 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.z6T 00:22:07.268 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:22:07.268 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ny6 00:22:07.268 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.268 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.268 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.268 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ny6 00:22:07.268 15:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ny6 00:22:07.527 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.aXq ]] 00:22:07.527 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.aXq 00:22:07.527 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.527 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.527 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.527 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.aXq 00:22:07.527 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.aXq 00:22:07.785 15:26:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:22:07.785 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.PwT 00:22:07.785 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.785 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.785 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.785 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.PwT 00:22:07.785 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.PwT 00:22:08.044 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:22:08.044 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:08.044 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:08.044 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:08.044 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:08.044 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:08.044 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:22:08.044 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:08.044 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:08.044 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:08.044 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:08.044 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.044 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.044 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.044 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.044 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.044 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.044 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.044 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.612 00:22:08.612 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.612 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.612 15:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.612 15:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.612 15:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.612 15:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.612 15:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.612 15:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.612 15:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.612 { 00:22:08.612 "cntlid": 1, 00:22:08.612 "qid": 0, 00:22:08.612 "state": "enabled", 00:22:08.612 "thread": "nvmf_tgt_poll_group_000", 00:22:08.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:08.612 "listen_address": { 00:22:08.612 "trtype": "RDMA", 00:22:08.612 "adrfam": "IPv4", 00:22:08.612 "traddr": "192.168.100.8", 00:22:08.612 "trsvcid": "4420" 00:22:08.612 }, 00:22:08.612 "peer_address": { 00:22:08.612 "trtype": "RDMA", 00:22:08.612 "adrfam": "IPv4", 00:22:08.612 "traddr": "192.168.100.8", 00:22:08.612 "trsvcid": "44937" 00:22:08.612 }, 00:22:08.612 "auth": { 00:22:08.612 "state": "completed", 00:22:08.612 "digest": "sha256", 00:22:08.612 "dhgroup": "null" 00:22:08.612 } 00:22:08.612 } 00:22:08.612 ]' 00:22:08.612 15:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.612 15:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:08.612 15:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:08.870 15:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:08.870 15:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.870 15:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.870 15:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.870 15:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:22:09.128 15:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:22:09.128 15:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:22:09.695 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.695 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:09.695 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.695 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.695 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.695 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.695 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:09.695 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:09.953 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:22:09.953 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.953 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:09.953 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:09.953 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:09.953 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.953 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.953 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.953 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.953 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.953 15:26:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.953 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.953 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.212 00:22:10.212 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:10.212 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:10.212 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.470 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.470 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.470 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.470 15:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.470 15:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.470 15:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:10.470 { 00:22:10.470 "cntlid": 3, 00:22:10.470 "qid": 0, 00:22:10.470 "state": "enabled", 00:22:10.470 "thread": "nvmf_tgt_poll_group_000", 00:22:10.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:10.470 "listen_address": { 00:22:10.470 "trtype": "RDMA", 00:22:10.470 "adrfam": "IPv4", 00:22:10.470 "traddr": "192.168.100.8", 00:22:10.470 "trsvcid": "4420" 00:22:10.470 }, 00:22:10.470 "peer_address": { 00:22:10.470 "trtype": "RDMA", 00:22:10.470 "adrfam": "IPv4", 00:22:10.470 "traddr": "192.168.100.8", 00:22:10.470 "trsvcid": "54519" 00:22:10.470 }, 00:22:10.470 "auth": { 00:22:10.470 "state": "completed", 00:22:10.470 "digest": "sha256", 00:22:10.470 "dhgroup": "null" 00:22:10.470 } 00:22:10.470 } 00:22:10.470 ]' 00:22:10.470 15:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:10.470 15:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:10.470 15:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:10.470 15:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:10.728 15:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.728 15:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.728 15:26:38 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.728 15:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.987 15:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:22:10.987 15:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:22:11.553 15:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.553 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:11.553 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.554 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.554 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.554 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:11.554 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:11.554 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:11.812 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:22:11.812 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:11.812 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:11.812 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:11.812 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:11.812 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.812 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.812 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.812 15:26:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.812 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.813 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.813 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.813 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.070 00:22:12.070 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.070 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.070 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.328 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.328 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.328 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.328 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.328 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.328 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:12.328 { 00:22:12.328 "cntlid": 5, 00:22:12.328 "qid": 0, 00:22:12.328 "state": "enabled", 00:22:12.328 "thread": "nvmf_tgt_poll_group_000", 00:22:12.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:12.328 "listen_address": { 00:22:12.328 "trtype": "RDMA", 00:22:12.328 "adrfam": "IPv4", 00:22:12.328 "traddr": "192.168.100.8", 00:22:12.328 "trsvcid": "4420" 00:22:12.328 }, 00:22:12.328 "peer_address": { 00:22:12.328 "trtype": "RDMA", 00:22:12.328 "adrfam": "IPv4", 00:22:12.328 "traddr": "192.168.100.8", 00:22:12.328 "trsvcid": "42886" 00:22:12.328 }, 00:22:12.328 "auth": { 00:22:12.328 "state": "completed", 00:22:12.328 "digest": "sha256", 00:22:12.328 "dhgroup": "null" 00:22:12.328 } 00:22:12.328 } 00:22:12.328 ]' 00:22:12.328 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:12.328 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:12.328 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:12.328 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:12.328 15:26:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:12.328 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.328 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.328 15:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.587 15:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:22:12.587 15:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:22:13.521 15:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.521 15:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:13.521 15:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.521 15:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.521 15:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.522 15:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:13.522 15:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:13.522 15:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:13.522 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:22:13.522 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:13.522 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:13.522 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:13.522 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:13.522 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.522 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:22:13.522 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.522 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.522 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.522 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:13.522 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:13.522 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:13.781 00:22:13.781 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:13.781 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:13.781 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.039 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.039 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.039 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.039 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.039 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.039 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:14.039 { 00:22:14.039 "cntlid": 7, 00:22:14.039 "qid": 0, 00:22:14.039 "state": "enabled", 00:22:14.039 "thread": "nvmf_tgt_poll_group_000", 00:22:14.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:14.039 "listen_address": { 00:22:14.039 "trtype": "RDMA", 00:22:14.039 "adrfam": "IPv4", 00:22:14.039 "traddr": "192.168.100.8", 00:22:14.039 "trsvcid": "4420" 00:22:14.039 }, 00:22:14.039 "peer_address": { 00:22:14.039 "trtype": "RDMA", 00:22:14.039 "adrfam": "IPv4", 00:22:14.039 "traddr": "192.168.100.8", 00:22:14.039 "trsvcid": "45593" 00:22:14.039 }, 00:22:14.039 "auth": { 00:22:14.039 "state": "completed", 00:22:14.039 "digest": "sha256", 00:22:14.039 "dhgroup": "null" 00:22:14.039 } 00:22:14.039 } 00:22:14.039 ]' 00:22:14.039 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:14.039 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:14.039 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
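[editor's note] Each iteration above ends by reading the qpair back from the target and confirming that authentication actually completed with the expected digest and DH group. A condensed sketch of that check; the rpc.py path and subsystem NQN are copied from the trace, while $digest and $dhgroup stand for the loop variables (sha256 and null in the iterations shown here).

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # host side: the attached controller must be the one we created
    [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # target side: the qpair must report a completed DH-HMAC-CHAP negotiation
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]     # e.g. sha256
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]    # e.g. null, ffdhe2048
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]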
00:22:14.297 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:14.297 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:14.297 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.297 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.297 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.555 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:22:14.555 15:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:22:15.121 15:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.121 15:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:15.121 15:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.121 15:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.121 15:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.121 15:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:15.121 15:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:15.121 15:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:15.121 15:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:15.380 15:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:22:15.380 15:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.380 15:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:15.380 15:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:15.380 15:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:15.380 15:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.380 15:26:42 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.380 15:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.380 15:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.380 15:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.380 15:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.380 15:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.380 15:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.639 00:22:15.639 15:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.639 15:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.639 15:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.897 15:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.897 15:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.897 15:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.897 15:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.897 15:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.897 15:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:15.897 { 00:22:15.897 "cntlid": 9, 00:22:15.897 "qid": 0, 00:22:15.897 "state": "enabled", 00:22:15.897 "thread": "nvmf_tgt_poll_group_000", 00:22:15.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:15.898 "listen_address": { 00:22:15.898 "trtype": "RDMA", 00:22:15.898 "adrfam": "IPv4", 00:22:15.898 "traddr": "192.168.100.8", 00:22:15.898 "trsvcid": "4420" 00:22:15.898 }, 00:22:15.898 "peer_address": { 00:22:15.898 "trtype": "RDMA", 00:22:15.898 "adrfam": "IPv4", 00:22:15.898 "traddr": "192.168.100.8", 00:22:15.898 "trsvcid": "40673" 00:22:15.898 }, 00:22:15.898 "auth": { 00:22:15.898 "state": "completed", 00:22:15.898 "digest": "sha256", 00:22:15.898 "dhgroup": "ffdhe2048" 00:22:15.898 } 00:22:15.898 } 00:22:15.898 ]' 00:22:15.898 15:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
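[editor's note] From here the same pattern repeats with --dhchap-dhgroups ffdhe2048 instead of null. One loop iteration, condensed from the trace; the key material lives in the keyring entries key0..key3 and ckey0..ckey2 registered earlier, $keyid is the loop variable, and key3 has no controller key, so the --dhchap-ctrlr-key arguments are simply dropped for that entry.

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }      # RPC socket of the bdev_nvme host app
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562

    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"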
00:22:15.898 15:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:15.898 15:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.898 15:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:15.898 15:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:16.156 15:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.156 15:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.156 15:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.156 15:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:22:16.156 15:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:22:17.091 15:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.091 15:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:17.091 15:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.091 15:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.091 15:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.091 15:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.091 15:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:17.091 15:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:17.349 15:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:22:17.349 15:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.349 15:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:17.349 15:26:44 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:17.349 15:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:17.349 15:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.349 15:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.349 15:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.349 15:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.349 15:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.349 15:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.349 15:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.350 15:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.608 00:22:17.608 15:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:17.608 15:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:17.608 15:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.608 15:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.608 15:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.608 15:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.608 15:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.866 15:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.866 15:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:17.866 { 00:22:17.866 "cntlid": 11, 00:22:17.866 "qid": 0, 00:22:17.866 "state": "enabled", 00:22:17.866 "thread": "nvmf_tgt_poll_group_000", 00:22:17.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:17.866 "listen_address": { 00:22:17.866 "trtype": "RDMA", 00:22:17.866 "adrfam": "IPv4", 00:22:17.866 "traddr": "192.168.100.8", 00:22:17.866 "trsvcid": "4420" 00:22:17.866 }, 00:22:17.866 "peer_address": { 00:22:17.866 "trtype": "RDMA", 00:22:17.866 "adrfam": "IPv4", 00:22:17.866 "traddr": 
"192.168.100.8", 00:22:17.866 "trsvcid": "54490" 00:22:17.866 }, 00:22:17.866 "auth": { 00:22:17.866 "state": "completed", 00:22:17.866 "digest": "sha256", 00:22:17.866 "dhgroup": "ffdhe2048" 00:22:17.866 } 00:22:17.866 } 00:22:17.866 ]' 00:22:17.866 15:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:17.866 15:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:17.866 15:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:17.866 15:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:17.866 15:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.866 15:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.866 15:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.866 15:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.125 15:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:22:18.125 15:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:22:18.691 15:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.951 15:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:18.951 15:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.951 15:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.951 15:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.951 15:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:18.951 15:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:18.951 15:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:18.951 15:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 
00:22:18.951 15:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:18.951 15:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:18.951 15:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:18.951 15:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:18.951 15:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.951 15:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.951 15:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.951 15:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.210 15:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.210 15:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.210 15:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.210 15:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.210 00:22:19.210 15:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:19.210 15:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.210 15:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:19.468 15:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.468 15:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.468 15:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.468 15:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.468 15:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.468 15:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:19.468 { 00:22:19.468 "cntlid": 13, 00:22:19.468 "qid": 0, 00:22:19.468 "state": "enabled", 00:22:19.468 "thread": "nvmf_tgt_poll_group_000", 00:22:19.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:19.468 "listen_address": { 00:22:19.468 
"trtype": "RDMA", 00:22:19.468 "adrfam": "IPv4", 00:22:19.468 "traddr": "192.168.100.8", 00:22:19.468 "trsvcid": "4420" 00:22:19.468 }, 00:22:19.468 "peer_address": { 00:22:19.468 "trtype": "RDMA", 00:22:19.468 "adrfam": "IPv4", 00:22:19.468 "traddr": "192.168.100.8", 00:22:19.468 "trsvcid": "34349" 00:22:19.468 }, 00:22:19.468 "auth": { 00:22:19.468 "state": "completed", 00:22:19.468 "digest": "sha256", 00:22:19.468 "dhgroup": "ffdhe2048" 00:22:19.468 } 00:22:19.468 } 00:22:19.468 ]' 00:22:19.468 15:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:19.468 15:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:19.726 15:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:19.726 15:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:19.726 15:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:19.726 15:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.726 15:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.726 15:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.984 15:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:22:19.984 15:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:22:20.551 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.551 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:20.551 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.551 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.551 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.551 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:20.551 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:20.551 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:20.810 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:22:20.810 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:20.810 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:20.810 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:20.810 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:20.810 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.810 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:22:20.810 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.810 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.810 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.810 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:20.810 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:20.810 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:21.068 00:22:21.068 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:21.068 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:21.068 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.326 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.326 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.326 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.326 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.326 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.326 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:21.326 { 00:22:21.326 "cntlid": 15, 00:22:21.326 "qid": 0, 00:22:21.326 "state": "enabled", 
00:22:21.326 "thread": "nvmf_tgt_poll_group_000", 00:22:21.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:21.326 "listen_address": { 00:22:21.326 "trtype": "RDMA", 00:22:21.326 "adrfam": "IPv4", 00:22:21.326 "traddr": "192.168.100.8", 00:22:21.326 "trsvcid": "4420" 00:22:21.326 }, 00:22:21.326 "peer_address": { 00:22:21.326 "trtype": "RDMA", 00:22:21.326 "adrfam": "IPv4", 00:22:21.326 "traddr": "192.168.100.8", 00:22:21.326 "trsvcid": "34689" 00:22:21.326 }, 00:22:21.326 "auth": { 00:22:21.326 "state": "completed", 00:22:21.326 "digest": "sha256", 00:22:21.326 "dhgroup": "ffdhe2048" 00:22:21.326 } 00:22:21.326 } 00:22:21.326 ]' 00:22:21.326 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:21.326 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:21.326 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:21.326 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:21.327 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:21.327 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.327 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.327 15:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.615 15:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:22:21.615 15:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:22:22.241 15:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.541 15:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:22.541 15:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.541 15:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.541 15:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.541 15:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:22.541 15:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:22.541 15:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:22.541 15:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:22.541 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:22:22.541 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:22.541 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:22.541 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:22.541 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:22.541 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.541 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.541 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.541 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.541 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.541 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.541 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.541 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.802 00:22:22.802 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:22.802 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:22.802 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.060 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.060 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.060 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.060 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.060 15:26:50 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.060 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.060 { 00:22:23.060 "cntlid": 17, 00:22:23.060 "qid": 0, 00:22:23.060 "state": "enabled", 00:22:23.060 "thread": "nvmf_tgt_poll_group_000", 00:22:23.060 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:23.060 "listen_address": { 00:22:23.060 "trtype": "RDMA", 00:22:23.060 "adrfam": "IPv4", 00:22:23.060 "traddr": "192.168.100.8", 00:22:23.060 "trsvcid": "4420" 00:22:23.060 }, 00:22:23.060 "peer_address": { 00:22:23.060 "trtype": "RDMA", 00:22:23.060 "adrfam": "IPv4", 00:22:23.060 "traddr": "192.168.100.8", 00:22:23.060 "trsvcid": "41686" 00:22:23.060 }, 00:22:23.060 "auth": { 00:22:23.060 "state": "completed", 00:22:23.060 "digest": "sha256", 00:22:23.060 "dhgroup": "ffdhe3072" 00:22:23.060 } 00:22:23.060 } 00:22:23.060 ]' 00:22:23.060 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.318 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:23.318 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.318 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:23.318 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:23.318 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.318 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.318 15:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.576 15:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:22:23.576 15:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:22:24.142 15:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.142 15:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:24.142 15:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.142 15:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:24.400 15:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.400 15:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:24.400 15:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:24.400 15:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:24.400 15:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:22:24.400 15:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:24.400 15:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:24.400 15:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:24.400 15:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:24.400 15:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.401 15:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.401 15:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.401 15:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.401 15:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.401 15:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.401 15:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.401 15:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.659 00:22:24.659 15:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:24.659 15:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:24.659 15:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.916 15:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.916 15:26:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.916 15:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.916 15:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.917 15:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.917 15:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:24.917 { 00:22:24.917 "cntlid": 19, 00:22:24.917 "qid": 0, 00:22:24.917 "state": "enabled", 00:22:24.917 "thread": "nvmf_tgt_poll_group_000", 00:22:24.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:24.917 "listen_address": { 00:22:24.917 "trtype": "RDMA", 00:22:24.917 "adrfam": "IPv4", 00:22:24.917 "traddr": "192.168.100.8", 00:22:24.917 "trsvcid": "4420" 00:22:24.917 }, 00:22:24.917 "peer_address": { 00:22:24.917 "trtype": "RDMA", 00:22:24.917 "adrfam": "IPv4", 00:22:24.917 "traddr": "192.168.100.8", 00:22:24.917 "trsvcid": "45152" 00:22:24.917 }, 00:22:24.917 "auth": { 00:22:24.917 "state": "completed", 00:22:24.917 "digest": "sha256", 00:22:24.917 "dhgroup": "ffdhe3072" 00:22:24.917 } 00:22:24.917 } 00:22:24.917 ]' 00:22:24.917 15:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:24.917 15:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:24.917 15:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:25.175 15:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:25.175 15:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:25.175 15:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.175 15:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.175 15:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.432 15:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:22:25.432 15:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:22:25.998 15:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.998 15:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:25.998 15:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.998 15:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.998 15:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.998 15:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:25.998 15:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:25.998 15:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:26.256 15:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:22:26.256 15:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:26.256 15:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:26.256 15:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:26.256 15:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:26.256 15:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.256 15:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:26.256 15:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.256 15:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.256 15:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.256 15:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:26.256 15:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:26.256 15:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:26.514 00:22:26.514 15:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:26.514 15:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:26.514 15:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.773 15:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.773 15:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.773 15:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.773 15:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.773 15:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.773 15:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:26.773 { 00:22:26.773 "cntlid": 21, 00:22:26.773 "qid": 0, 00:22:26.773 "state": "enabled", 00:22:26.773 "thread": "nvmf_tgt_poll_group_000", 00:22:26.773 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:26.773 "listen_address": { 00:22:26.773 "trtype": "RDMA", 00:22:26.773 "adrfam": "IPv4", 00:22:26.773 "traddr": "192.168.100.8", 00:22:26.773 "trsvcid": "4420" 00:22:26.773 }, 00:22:26.773 "peer_address": { 00:22:26.773 "trtype": "RDMA", 00:22:26.773 "adrfam": "IPv4", 00:22:26.773 "traddr": "192.168.100.8", 00:22:26.773 "trsvcid": "49007" 00:22:26.773 }, 00:22:26.773 "auth": { 00:22:26.773 "state": "completed", 00:22:26.773 "digest": "sha256", 00:22:26.773 "dhgroup": "ffdhe3072" 00:22:26.773 } 00:22:26.773 } 00:22:26.773 ]' 00:22:26.773 15:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:26.773 15:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:26.773 15:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:26.773 15:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:26.773 15:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:27.031 15:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.031 15:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.031 15:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.031 15:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:22:27.031 15:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:22:27.964 15:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.964 15:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:27.964 15:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.964 15:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.964 15:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.964 15:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:27.964 15:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:27.964 15:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:28.223 15:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:22:28.223 15:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:28.223 15:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:28.223 15:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:28.223 15:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:28.223 15:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.223 15:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:22:28.223 15:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.223 15:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.223 15:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.223 15:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:28.223 15:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:28.223 15:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:28.480 00:22:28.480 15:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:28.480 15:26:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:28.480 15:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.738 15:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.738 15:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.738 15:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.738 15:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.738 15:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.738 15:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:28.738 { 00:22:28.738 "cntlid": 23, 00:22:28.738 "qid": 0, 00:22:28.738 "state": "enabled", 00:22:28.738 "thread": "nvmf_tgt_poll_group_000", 00:22:28.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:28.739 "listen_address": { 00:22:28.739 "trtype": "RDMA", 00:22:28.739 "adrfam": "IPv4", 00:22:28.739 "traddr": "192.168.100.8", 00:22:28.739 "trsvcid": "4420" 00:22:28.739 }, 00:22:28.739 "peer_address": { 00:22:28.739 "trtype": "RDMA", 00:22:28.739 "adrfam": "IPv4", 00:22:28.739 "traddr": "192.168.100.8", 00:22:28.739 "trsvcid": "57382" 00:22:28.739 }, 00:22:28.739 "auth": { 00:22:28.739 "state": "completed", 00:22:28.739 "digest": "sha256", 00:22:28.739 "dhgroup": "ffdhe3072" 00:22:28.739 } 00:22:28.739 } 00:22:28.739 ]' 00:22:28.739 15:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:28.739 15:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:28.739 15:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:28.739 15:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:28.739 15:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:28.739 15:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.739 15:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.739 15:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.996 15:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:22:28.996 15:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:22:29.563 15:26:57 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.821 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:29.821 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.821 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.821 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.821 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:29.821 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:29.821 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:29.821 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:30.079 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:22:30.079 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:30.079 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:30.079 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:30.079 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:30.079 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.079 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.079 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.079 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.079 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.079 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.079 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.079 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.338 00:22:30.338 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:30.338 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:30.338 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.596 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.596 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.596 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.596 15:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.596 15:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.596 15:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:30.596 { 00:22:30.596 "cntlid": 25, 00:22:30.596 "qid": 0, 00:22:30.596 "state": "enabled", 00:22:30.596 "thread": "nvmf_tgt_poll_group_000", 00:22:30.596 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:30.596 "listen_address": { 00:22:30.596 "trtype": "RDMA", 00:22:30.596 "adrfam": "IPv4", 00:22:30.596 "traddr": "192.168.100.8", 00:22:30.596 "trsvcid": "4420" 00:22:30.596 }, 00:22:30.596 "peer_address": { 00:22:30.596 "trtype": "RDMA", 00:22:30.596 "adrfam": "IPv4", 00:22:30.596 "traddr": "192.168.100.8", 00:22:30.596 "trsvcid": "44348" 00:22:30.596 }, 00:22:30.596 "auth": { 00:22:30.596 "state": "completed", 00:22:30.596 "digest": "sha256", 00:22:30.596 "dhgroup": "ffdhe4096" 00:22:30.596 } 00:22:30.596 } 00:22:30.596 ]' 00:22:30.596 15:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:30.596 15:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:30.596 15:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:30.596 15:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:30.596 15:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:30.596 15:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.596 15:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.596 15:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.854 15:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:22:30.855 15:26:58 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:22:31.421 15:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.679 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:31.679 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.679 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.679 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.679 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:31.679 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:31.679 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:31.679 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:22:31.679 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:31.679 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:31.679 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:31.679 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:31.679 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.679 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.679 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.679 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.679 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.680 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.680 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.680 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.245 00:22:32.245 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:32.245 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.245 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:32.245 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.245 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.245 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.245 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.245 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.245 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:32.245 { 00:22:32.245 "cntlid": 27, 00:22:32.245 "qid": 0, 00:22:32.245 "state": "enabled", 00:22:32.245 "thread": "nvmf_tgt_poll_group_000", 00:22:32.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:32.245 "listen_address": { 00:22:32.245 "trtype": "RDMA", 00:22:32.245 "adrfam": "IPv4", 00:22:32.245 "traddr": "192.168.100.8", 00:22:32.245 "trsvcid": "4420" 00:22:32.245 }, 00:22:32.245 "peer_address": { 00:22:32.245 "trtype": "RDMA", 00:22:32.245 "adrfam": "IPv4", 00:22:32.245 "traddr": "192.168.100.8", 00:22:32.245 "trsvcid": "41534" 00:22:32.245 }, 00:22:32.245 "auth": { 00:22:32.245 "state": "completed", 00:22:32.245 "digest": "sha256", 00:22:32.245 "dhgroup": "ffdhe4096" 00:22:32.245 } 00:22:32.245 } 00:22:32.245 ]' 00:22:32.245 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:32.503 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:32.503 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:32.503 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:32.503 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:32.503 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.503 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.503 15:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.761 15:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:22:32.761 15:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:22:33.328 15:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.328 15:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:33.328 15:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.328 15:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.328 15:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.328 15:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:33.328 15:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:33.328 15:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:33.586 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:22:33.586 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:33.586 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:33.586 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:33.586 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:33.586 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.586 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.586 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.586 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.586 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.586 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.586 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.586 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.844 00:22:33.844 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:33.844 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.844 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:34.102 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.102 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.102 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.102 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.102 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.102 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:34.102 { 00:22:34.102 "cntlid": 29, 00:22:34.102 "qid": 0, 00:22:34.102 "state": "enabled", 00:22:34.102 "thread": "nvmf_tgt_poll_group_000", 00:22:34.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:34.102 "listen_address": { 00:22:34.102 "trtype": "RDMA", 00:22:34.102 "adrfam": "IPv4", 00:22:34.102 "traddr": "192.168.100.8", 00:22:34.102 "trsvcid": "4420" 00:22:34.102 }, 00:22:34.102 "peer_address": { 00:22:34.102 "trtype": "RDMA", 00:22:34.102 "adrfam": "IPv4", 00:22:34.102 "traddr": "192.168.100.8", 00:22:34.102 "trsvcid": "47136" 00:22:34.102 }, 00:22:34.102 "auth": { 00:22:34.102 "state": "completed", 00:22:34.102 "digest": "sha256", 00:22:34.102 "dhgroup": "ffdhe4096" 00:22:34.102 } 00:22:34.102 } 00:22:34.102 ]' 00:22:34.102 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:34.102 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:34.102 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:34.361 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:34.361 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:34.361 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.361 15:27:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.361 15:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.619 15:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:22:34.619 15:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:22:35.185 15:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.185 15:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:35.185 15:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.185 15:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.443 15:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.443 15:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:35.443 15:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:35.443 15:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:35.443 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:22:35.443 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:35.443 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:35.443 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:35.443 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:35.443 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.443 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:22:35.443 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.443 15:27:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.443 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.443 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:35.443 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:35.443 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:35.702 00:22:35.960 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:35.960 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:35.961 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.961 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.961 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.961 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.961 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.961 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.961 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:35.961 { 00:22:35.961 "cntlid": 31, 00:22:35.961 "qid": 0, 00:22:35.961 "state": "enabled", 00:22:35.961 "thread": "nvmf_tgt_poll_group_000", 00:22:35.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:35.961 "listen_address": { 00:22:35.961 "trtype": "RDMA", 00:22:35.961 "adrfam": "IPv4", 00:22:35.961 "traddr": "192.168.100.8", 00:22:35.961 "trsvcid": "4420" 00:22:35.961 }, 00:22:35.961 "peer_address": { 00:22:35.961 "trtype": "RDMA", 00:22:35.961 "adrfam": "IPv4", 00:22:35.961 "traddr": "192.168.100.8", 00:22:35.961 "trsvcid": "40368" 00:22:35.961 }, 00:22:35.961 "auth": { 00:22:35.961 "state": "completed", 00:22:35.961 "digest": "sha256", 00:22:35.961 "dhgroup": "ffdhe4096" 00:22:35.961 } 00:22:35.961 } 00:22:35.961 ]' 00:22:35.961 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:36.219 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:36.219 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:36.219 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:36.219 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
jq -r '.[0].auth.state' 00:22:36.219 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.219 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.219 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.478 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:22:36.478 15:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:22:37.046 15:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.046 15:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:37.046 15:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.046 15:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.046 15:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.046 15:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:37.046 15:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:37.046 15:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:37.046 15:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:37.305 15:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:22:37.305 15:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:37.305 15:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:37.305 15:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:37.305 15:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:37.305 15:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:37.305 15:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:22:37.305 15:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.305 15:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.305 15:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.305 15:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.305 15:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.305 15:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.872 00:22:37.872 15:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:37.872 15:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:37.872 15:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.872 15:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.872 15:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.872 15:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.872 15:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.872 15:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.872 15:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:37.872 { 00:22:37.872 "cntlid": 33, 00:22:37.872 "qid": 0, 00:22:37.872 "state": "enabled", 00:22:37.872 "thread": "nvmf_tgt_poll_group_000", 00:22:37.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:37.872 "listen_address": { 00:22:37.872 "trtype": "RDMA", 00:22:37.872 "adrfam": "IPv4", 00:22:37.872 "traddr": "192.168.100.8", 00:22:37.872 "trsvcid": "4420" 00:22:37.872 }, 00:22:37.872 "peer_address": { 00:22:37.872 "trtype": "RDMA", 00:22:37.872 "adrfam": "IPv4", 00:22:37.872 "traddr": "192.168.100.8", 00:22:37.872 "trsvcid": "53162" 00:22:37.872 }, 00:22:37.872 "auth": { 00:22:37.872 "state": "completed", 00:22:37.872 "digest": "sha256", 00:22:37.872 "dhgroup": "ffdhe6144" 00:22:37.872 } 00:22:37.872 } 00:22:37.872 ]' 00:22:37.872 15:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:38.130 15:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:38.130 15:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq 
-r '.[0].auth.dhgroup' 00:22:38.130 15:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:38.130 15:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:38.130 15:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.130 15:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.130 15:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.388 15:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:22:38.388 15:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:22:38.953 15:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.212 15:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:39.212 15:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.212 15:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.212 15:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.212 15:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:39.212 15:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:39.212 15:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:39.212 15:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:22:39.212 15:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:39.212 15:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:39.212 15:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:39.212 15:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:39.212 15:27:06 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.212 15:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.212 15:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.212 15:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.212 15:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.212 15:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.212 15:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.212 15:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.779 00:22:39.779 15:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:39.779 15:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:39.779 15:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.779 15:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.779 15:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.779 15:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.779 15:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.779 15:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.779 15:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:39.779 { 00:22:39.779 "cntlid": 35, 00:22:39.779 "qid": 0, 00:22:39.779 "state": "enabled", 00:22:39.779 "thread": "nvmf_tgt_poll_group_000", 00:22:39.779 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:39.779 "listen_address": { 00:22:39.779 "trtype": "RDMA", 00:22:39.779 "adrfam": "IPv4", 00:22:39.779 "traddr": "192.168.100.8", 00:22:39.779 "trsvcid": "4420" 00:22:39.779 }, 00:22:39.779 "peer_address": { 00:22:39.779 "trtype": "RDMA", 00:22:39.779 "adrfam": "IPv4", 00:22:39.779 "traddr": "192.168.100.8", 00:22:39.779 "trsvcid": "55773" 00:22:39.779 }, 00:22:39.779 "auth": { 00:22:39.779 "state": "completed", 00:22:39.779 "digest": "sha256", 00:22:39.779 "dhgroup": "ffdhe6144" 00:22:39.779 } 00:22:39.779 } 
00:22:39.779 ]' 00:22:39.779 15:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:40.037 15:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:40.037 15:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:40.037 15:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:40.037 15:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:40.037 15:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.037 15:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.037 15:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.295 15:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:22:40.295 15:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:22:40.859 15:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.117 15:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:41.117 15:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.117 15:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.117 15:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.117 15:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:41.117 15:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:41.117 15:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:41.117 15:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:22:41.117 15:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:41.117 15:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha256 00:22:41.117 15:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:41.117 15:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:41.117 15:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.117 15:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.117 15:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.117 15:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.117 15:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.117 15:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.117 15:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.117 15:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.683 00:22:41.683 15:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:41.683 15:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:41.683 15:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.683 15:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.683 15:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.683 15:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.683 15:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.941 15:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.941 15:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:41.941 { 00:22:41.941 "cntlid": 37, 00:22:41.941 "qid": 0, 00:22:41.941 "state": "enabled", 00:22:41.941 "thread": "nvmf_tgt_poll_group_000", 00:22:41.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:41.941 "listen_address": { 00:22:41.941 "trtype": "RDMA", 00:22:41.941 "adrfam": "IPv4", 00:22:41.941 "traddr": "192.168.100.8", 00:22:41.941 "trsvcid": "4420" 00:22:41.941 }, 00:22:41.941 "peer_address": { 00:22:41.941 "trtype": "RDMA", 00:22:41.941 "adrfam": 
"IPv4", 00:22:41.941 "traddr": "192.168.100.8", 00:22:41.941 "trsvcid": "41568" 00:22:41.941 }, 00:22:41.941 "auth": { 00:22:41.941 "state": "completed", 00:22:41.941 "digest": "sha256", 00:22:41.941 "dhgroup": "ffdhe6144" 00:22:41.941 } 00:22:41.941 } 00:22:41.941 ]' 00:22:41.941 15:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:41.941 15:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:41.941 15:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:41.941 15:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:41.941 15:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:41.941 15:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.941 15:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.941 15:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.198 15:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:22:42.199 15:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:22:42.764 15:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.022 15:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:43.022 15:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.022 15:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.022 15:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.022 15:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:43.022 15:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:43.022 15:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:43.022 15:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha256 ffdhe6144 3 00:22:43.022 15:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:43.022 15:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:43.022 15:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:43.022 15:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:43.022 15:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:43.022 15:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:22:43.022 15:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.022 15:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.280 15:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.280 15:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:43.280 15:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:43.280 15:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:43.538 00:22:43.538 15:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:43.538 15:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:43.538 15:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.796 15:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.796 15:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.796 15:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.796 15:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.796 15:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.796 15:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:43.796 { 00:22:43.796 "cntlid": 39, 00:22:43.796 "qid": 0, 00:22:43.796 "state": "enabled", 00:22:43.796 "thread": "nvmf_tgt_poll_group_000", 00:22:43.796 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:43.796 "listen_address": { 00:22:43.796 "trtype": "RDMA", 00:22:43.796 "adrfam": "IPv4", 00:22:43.796 
"traddr": "192.168.100.8", 00:22:43.796 "trsvcid": "4420" 00:22:43.796 }, 00:22:43.796 "peer_address": { 00:22:43.796 "trtype": "RDMA", 00:22:43.796 "adrfam": "IPv4", 00:22:43.796 "traddr": "192.168.100.8", 00:22:43.796 "trsvcid": "57130" 00:22:43.796 }, 00:22:43.796 "auth": { 00:22:43.796 "state": "completed", 00:22:43.796 "digest": "sha256", 00:22:43.796 "dhgroup": "ffdhe6144" 00:22:43.796 } 00:22:43.796 } 00:22:43.796 ]' 00:22:43.796 15:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:43.796 15:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:43.796 15:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:43.796 15:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:43.796 15:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:43.796 15:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.796 15:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.796 15:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.054 15:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:22:44.054 15:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:22:44.620 15:27:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.878 15:27:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:44.878 15:27:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.878 15:27:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.878 15:27:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.878 15:27:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:44.878 15:27:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:44.878 15:27:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:44.878 15:27:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:45.137 15:27:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:22:45.137 15:27:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:45.137 15:27:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:45.137 15:27:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:45.137 15:27:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:45.137 15:27:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:45.137 15:27:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:45.137 15:27:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.137 15:27:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.137 15:27:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.137 15:27:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:45.137 15:27:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:45.137 15:27:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:45.723 00:22:45.723 15:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:45.723 15:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:45.723 15:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.723 15:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.723 15:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.723 15:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.723 15:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.723 15:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.723 15:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:45.723 { 00:22:45.723 "cntlid": 41, 00:22:45.723 "qid": 0, 00:22:45.723 "state": "enabled", 
00:22:45.723 "thread": "nvmf_tgt_poll_group_000", 00:22:45.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:45.723 "listen_address": { 00:22:45.723 "trtype": "RDMA", 00:22:45.723 "adrfam": "IPv4", 00:22:45.723 "traddr": "192.168.100.8", 00:22:45.723 "trsvcid": "4420" 00:22:45.723 }, 00:22:45.723 "peer_address": { 00:22:45.723 "trtype": "RDMA", 00:22:45.723 "adrfam": "IPv4", 00:22:45.724 "traddr": "192.168.100.8", 00:22:45.724 "trsvcid": "57175" 00:22:45.724 }, 00:22:45.724 "auth": { 00:22:45.724 "state": "completed", 00:22:45.724 "digest": "sha256", 00:22:45.724 "dhgroup": "ffdhe8192" 00:22:45.724 } 00:22:45.724 } 00:22:45.724 ]' 00:22:45.724 15:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:45.724 15:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:45.724 15:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:45.992 15:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:45.992 15:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:45.992 15:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.992 15:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.993 15:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.993 15:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:22:45.993 15:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:22:46.945 15:27:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.945 15:27:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:46.945 15:27:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.945 15:27:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.945 15:27:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.945 15:27:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:46.945 15:27:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:46.945 15:27:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:46.945 15:27:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:22:46.945 15:27:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:47.203 15:27:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:47.203 15:27:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:47.203 15:27:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:47.203 15:27:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.203 15:27:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:47.203 15:27:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.203 15:27:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.203 15:27:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.203 15:27:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:47.203 15:27:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:47.203 15:27:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:47.461 00:22:47.719 15:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:47.719 15:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:47.719 15:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.719 15:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.719 15:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:47.719 15:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.719 15:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:47.719 15:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.719 15:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:47.719 { 00:22:47.719 "cntlid": 43, 00:22:47.719 "qid": 0, 00:22:47.719 "state": "enabled", 00:22:47.719 "thread": "nvmf_tgt_poll_group_000", 00:22:47.719 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:47.719 "listen_address": { 00:22:47.719 "trtype": "RDMA", 00:22:47.719 "adrfam": "IPv4", 00:22:47.719 "traddr": "192.168.100.8", 00:22:47.719 "trsvcid": "4420" 00:22:47.719 }, 00:22:47.719 "peer_address": { 00:22:47.719 "trtype": "RDMA", 00:22:47.719 "adrfam": "IPv4", 00:22:47.719 "traddr": "192.168.100.8", 00:22:47.719 "trsvcid": "55688" 00:22:47.719 }, 00:22:47.719 "auth": { 00:22:47.719 "state": "completed", 00:22:47.719 "digest": "sha256", 00:22:47.719 "dhgroup": "ffdhe8192" 00:22:47.719 } 00:22:47.719 } 00:22:47.719 ]' 00:22:47.719 15:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:47.976 15:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:47.977 15:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:47.977 15:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:47.977 15:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:47.977 15:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:47.977 15:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.977 15:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.235 15:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:22:48.235 15:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:22:48.801 15:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.060 15:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:49.060 15:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.060 15:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:49.060 15:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.060 15:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:49.060 15:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:49.060 15:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:49.060 15:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:22:49.060 15:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:49.060 15:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:49.060 15:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:49.060 15:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:49.060 15:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.060 15:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:49.060 15:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.060 15:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.060 15:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.060 15:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:49.060 15:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:49.060 15:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:49.628 00:22:49.628 15:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:49.628 15:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.628 15:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:49.886 15:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.886 15:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.886 15:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.886 15:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.886 15:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.886 15:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:49.886 { 00:22:49.886 "cntlid": 45, 00:22:49.886 "qid": 0, 00:22:49.886 "state": "enabled", 00:22:49.886 "thread": "nvmf_tgt_poll_group_000", 00:22:49.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:49.886 "listen_address": { 00:22:49.886 "trtype": "RDMA", 00:22:49.886 "adrfam": "IPv4", 00:22:49.886 "traddr": "192.168.100.8", 00:22:49.886 "trsvcid": "4420" 00:22:49.886 }, 00:22:49.886 "peer_address": { 00:22:49.886 "trtype": "RDMA", 00:22:49.886 "adrfam": "IPv4", 00:22:49.886 "traddr": "192.168.100.8", 00:22:49.886 "trsvcid": "48767" 00:22:49.886 }, 00:22:49.886 "auth": { 00:22:49.886 "state": "completed", 00:22:49.886 "digest": "sha256", 00:22:49.886 "dhgroup": "ffdhe8192" 00:22:49.886 } 00:22:49.886 } 00:22:49.886 ]' 00:22:49.887 15:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:49.887 15:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:49.887 15:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:49.887 15:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:49.887 15:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:49.887 15:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:49.887 15:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.887 15:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.145 15:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:22:50.145 15:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:22:50.711 15:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.969 15:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:50.969 15:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.969 15:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.969 15:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.969 15:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:50.969 15:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:50.969 15:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:51.227 15:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:22:51.227 15:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:51.227 15:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:51.227 15:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:51.227 15:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:51.227 15:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:51.227 15:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:22:51.227 15:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.227 15:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.227 15:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.227 15:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:51.227 15:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:51.227 15:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:51.793 00:22:51.793 15:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:51.793 15:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.793 15:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:51.793 
15:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.793 15:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.793 15:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.793 15:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.793 15:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.793 15:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:51.793 { 00:22:51.793 "cntlid": 47, 00:22:51.793 "qid": 0, 00:22:51.793 "state": "enabled", 00:22:51.793 "thread": "nvmf_tgt_poll_group_000", 00:22:51.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:51.793 "listen_address": { 00:22:51.793 "trtype": "RDMA", 00:22:51.793 "adrfam": "IPv4", 00:22:51.793 "traddr": "192.168.100.8", 00:22:51.793 "trsvcid": "4420" 00:22:51.793 }, 00:22:51.793 "peer_address": { 00:22:51.793 "trtype": "RDMA", 00:22:51.793 "adrfam": "IPv4", 00:22:51.793 "traddr": "192.168.100.8", 00:22:51.793 "trsvcid": "55610" 00:22:51.793 }, 00:22:51.793 "auth": { 00:22:51.793 "state": "completed", 00:22:51.793 "digest": "sha256", 00:22:51.793 "dhgroup": "ffdhe8192" 00:22:51.793 } 00:22:51.793 } 00:22:51.793 ]' 00:22:51.793 15:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:52.053 15:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:52.053 15:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:52.053 15:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:52.053 15:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:52.053 15:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:52.053 15:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.053 15:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.311 15:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:22:52.311 15:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:22:52.877 15:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.877 15:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:52.877 15:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.877 15:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.136 15:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.136 15:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:53.136 15:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:53.136 15:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:53.136 15:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:53.136 15:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:53.136 15:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:22:53.136 15:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:53.136 15:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:53.136 15:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:53.136 15:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:53.136 15:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:53.136 15:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:53.136 15:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.136 15:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.136 15:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.136 15:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:53.136 15:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:53.136 15:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:53.394 00:22:53.394 15:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:22:53.394 15:27:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:53.395 15:27:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.653 15:27:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.653 15:27:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.653 15:27:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.653 15:27:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.653 15:27:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.653 15:27:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:53.653 { 00:22:53.653 "cntlid": 49, 00:22:53.653 "qid": 0, 00:22:53.653 "state": "enabled", 00:22:53.653 "thread": "nvmf_tgt_poll_group_000", 00:22:53.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:53.653 "listen_address": { 00:22:53.653 "trtype": "RDMA", 00:22:53.653 "adrfam": "IPv4", 00:22:53.653 "traddr": "192.168.100.8", 00:22:53.653 "trsvcid": "4420" 00:22:53.653 }, 00:22:53.653 "peer_address": { 00:22:53.653 "trtype": "RDMA", 00:22:53.653 "adrfam": "IPv4", 00:22:53.653 "traddr": "192.168.100.8", 00:22:53.653 "trsvcid": "42372" 00:22:53.653 }, 00:22:53.653 "auth": { 00:22:53.653 "state": "completed", 00:22:53.653 "digest": "sha384", 00:22:53.653 "dhgroup": "null" 00:22:53.653 } 00:22:53.653 } 00:22:53.653 ]' 00:22:53.653 15:27:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:53.653 15:27:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:53.653 15:27:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:53.912 15:27:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:53.912 15:27:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:53.912 15:27:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.912 15:27:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.912 15:27:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.170 15:27:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:22:54.170 15:27:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:22:54.736 15:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:54.736 15:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:54.736 15:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.736 15:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.736 15:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.736 15:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:54.736 15:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:54.736 15:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:54.995 15:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:22:54.995 15:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:54.995 15:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:54.995 15:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:54.995 15:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:54.995 15:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:54.995 15:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:54.995 15:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.995 15:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.995 15:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.995 15:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:54.995 15:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:54.995 15:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.253 00:22:55.253 15:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:55.253 15:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:55.253 15:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.511 15:27:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.511 15:27:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.511 15:27:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.511 15:27:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.511 15:27:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.511 15:27:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:55.511 { 00:22:55.511 "cntlid": 51, 00:22:55.511 "qid": 0, 00:22:55.512 "state": "enabled", 00:22:55.512 "thread": "nvmf_tgt_poll_group_000", 00:22:55.512 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:55.512 "listen_address": { 00:22:55.512 "trtype": "RDMA", 00:22:55.512 "adrfam": "IPv4", 00:22:55.512 "traddr": "192.168.100.8", 00:22:55.512 "trsvcid": "4420" 00:22:55.512 }, 00:22:55.512 "peer_address": { 00:22:55.512 "trtype": "RDMA", 00:22:55.512 "adrfam": "IPv4", 00:22:55.512 "traddr": "192.168.100.8", 00:22:55.512 "trsvcid": "48219" 00:22:55.512 }, 00:22:55.512 "auth": { 00:22:55.512 "state": "completed", 00:22:55.512 "digest": "sha384", 00:22:55.512 "dhgroup": "null" 00:22:55.512 } 00:22:55.512 } 00:22:55.512 ]' 00:22:55.512 15:27:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:55.512 15:27:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:55.512 15:27:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:55.512 15:27:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:55.512 15:27:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:55.512 15:27:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:55.512 15:27:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.512 15:27:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.770 15:27:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:22:55.770 15:27:23 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:22:56.704 15:27:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:56.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:56.704 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:56.704 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.704 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.704 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.704 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:56.704 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:56.704 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:56.704 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:22:56.704 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:56.704 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:56.704 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:56.704 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:56.704 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:56.704 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:56.704 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.704 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.704 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.704 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:56.704 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:22:56.704 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:56.963 00:22:56.963 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:57.221 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:57.221 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.221 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.221 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:57.221 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.221 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.221 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.221 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:57.221 { 00:22:57.221 "cntlid": 53, 00:22:57.221 "qid": 0, 00:22:57.221 "state": "enabled", 00:22:57.221 "thread": "nvmf_tgt_poll_group_000", 00:22:57.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:57.221 "listen_address": { 00:22:57.221 "trtype": "RDMA", 00:22:57.221 "adrfam": "IPv4", 00:22:57.221 "traddr": "192.168.100.8", 00:22:57.221 "trsvcid": "4420" 00:22:57.221 }, 00:22:57.221 "peer_address": { 00:22:57.221 "trtype": "RDMA", 00:22:57.221 "adrfam": "IPv4", 00:22:57.221 "traddr": "192.168.100.8", 00:22:57.221 "trsvcid": "59826" 00:22:57.221 }, 00:22:57.221 "auth": { 00:22:57.221 "state": "completed", 00:22:57.221 "digest": "sha384", 00:22:57.221 "dhgroup": "null" 00:22:57.221 } 00:22:57.221 } 00:22:57.221 ]' 00:22:57.221 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:57.479 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:57.479 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:57.479 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:57.479 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:57.479 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:57.479 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:57.479 15:27:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:57.738 15:27:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:22:57.738 15:27:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:22:58.304 15:27:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.304 15:27:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:58.304 15:27:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.304 15:27:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.562 15:27:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.562 15:27:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:58.562 15:27:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:58.562 15:27:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:58.563 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:22:58.563 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:58.563 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:58.563 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:58.563 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:58.563 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:58.563 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:22:58.563 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.563 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.563 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.563 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:58.563 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 
-a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:58.563 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:58.822 00:22:58.822 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:58.822 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:58.822 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.080 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.080 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:59.080 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.080 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.080 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.081 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:59.081 { 00:22:59.081 "cntlid": 55, 00:22:59.081 "qid": 0, 00:22:59.081 "state": "enabled", 00:22:59.081 "thread": "nvmf_tgt_poll_group_000", 00:22:59.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:22:59.081 "listen_address": { 00:22:59.081 "trtype": "RDMA", 00:22:59.081 "adrfam": "IPv4", 00:22:59.081 "traddr": "192.168.100.8", 00:22:59.081 "trsvcid": "4420" 00:22:59.081 }, 00:22:59.081 "peer_address": { 00:22:59.081 "trtype": "RDMA", 00:22:59.081 "adrfam": "IPv4", 00:22:59.081 "traddr": "192.168.100.8", 00:22:59.081 "trsvcid": "57508" 00:22:59.081 }, 00:22:59.081 "auth": { 00:22:59.081 "state": "completed", 00:22:59.081 "digest": "sha384", 00:22:59.081 "dhgroup": "null" 00:22:59.081 } 00:22:59.081 } 00:22:59.081 ]' 00:22:59.081 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:59.081 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:59.081 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:59.339 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:59.339 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:59.339 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.339 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.339 15:27:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:22:59.598 15:27:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:22:59.598 15:27:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:23:00.165 15:27:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.165 15:27:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:00.165 15:27:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.165 15:27:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.165 15:27:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.165 15:27:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:00.165 15:27:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:00.165 15:27:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:00.165 15:27:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:00.423 15:27:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:23:00.423 15:27:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:00.423 15:27:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:00.423 15:27:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:00.423 15:27:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:00.424 15:27:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.424 15:27:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.424 15:27:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.424 15:27:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.424 15:27:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.424 15:27:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.424 15:27:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.424 15:27:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.684 00:23:00.684 15:27:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:00.684 15:27:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:00.684 15:27:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.945 15:27:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.945 15:27:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:00.945 15:27:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.945 15:27:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.945 15:27:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.945 15:27:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:00.945 { 00:23:00.945 "cntlid": 57, 00:23:00.945 "qid": 0, 00:23:00.945 "state": "enabled", 00:23:00.945 "thread": "nvmf_tgt_poll_group_000", 00:23:00.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:00.945 "listen_address": { 00:23:00.945 "trtype": "RDMA", 00:23:00.945 "adrfam": "IPv4", 00:23:00.945 "traddr": "192.168.100.8", 00:23:00.945 "trsvcid": "4420" 00:23:00.945 }, 00:23:00.945 "peer_address": { 00:23:00.945 "trtype": "RDMA", 00:23:00.945 "adrfam": "IPv4", 00:23:00.945 "traddr": "192.168.100.8", 00:23:00.945 "trsvcid": "33805" 00:23:00.945 }, 00:23:00.945 "auth": { 00:23:00.945 "state": "completed", 00:23:00.945 "digest": "sha384", 00:23:00.945 "dhgroup": "ffdhe2048" 00:23:00.945 } 00:23:00.945 } 00:23:00.945 ]' 00:23:00.945 15:27:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:00.945 15:27:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:00.945 15:27:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:00.945 15:27:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:00.945 15:27:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:00.945 15:27:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:00.945 15:27:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:23:00.945 15:27:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.203 15:27:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:23:01.203 15:27:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:23:01.770 15:27:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:02.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:02.029 15:27:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:02.029 15:27:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.029 15:27:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.029 15:27:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.029 15:27:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:02.029 15:27:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:02.029 15:27:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:02.287 15:27:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:23:02.287 15:27:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:02.287 15:27:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:02.287 15:27:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:02.287 15:27:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:02.287 15:27:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:02.287 15:27:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.287 15:27:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.287 
15:27:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.287 15:27:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.287 15:27:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.287 15:27:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.287 15:27:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.545 00:23:02.545 15:27:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:02.545 15:27:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.545 15:27:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:02.803 15:27:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.803 15:27:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.803 15:27:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.803 15:27:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.803 15:27:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.803 15:27:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:02.803 { 00:23:02.803 "cntlid": 59, 00:23:02.803 "qid": 0, 00:23:02.803 "state": "enabled", 00:23:02.803 "thread": "nvmf_tgt_poll_group_000", 00:23:02.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:02.803 "listen_address": { 00:23:02.803 "trtype": "RDMA", 00:23:02.803 "adrfam": "IPv4", 00:23:02.803 "traddr": "192.168.100.8", 00:23:02.803 "trsvcid": "4420" 00:23:02.803 }, 00:23:02.803 "peer_address": { 00:23:02.803 "trtype": "RDMA", 00:23:02.803 "adrfam": "IPv4", 00:23:02.803 "traddr": "192.168.100.8", 00:23:02.803 "trsvcid": "43014" 00:23:02.803 }, 00:23:02.803 "auth": { 00:23:02.803 "state": "completed", 00:23:02.803 "digest": "sha384", 00:23:02.803 "dhgroup": "ffdhe2048" 00:23:02.803 } 00:23:02.803 } 00:23:02.803 ]' 00:23:02.803 15:27:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:02.803 15:27:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:02.803 15:27:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:02.803 15:27:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 
00:23:02.803 15:27:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:02.803 15:27:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.803 15:27:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.803 15:27:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:03.062 15:27:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:23:03.062 15:27:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:23:03.628 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.886 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:03.886 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.886 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.886 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.886 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:03.886 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:03.886 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:03.886 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:23:03.886 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:03.886 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:03.886 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:03.886 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:03.886 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:03.886 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:03.886 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.886 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.145 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.145 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.145 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.145 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.145 00:23:04.403 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:04.403 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:04.403 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.403 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.403 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:04.403 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.403 15:27:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.403 15:27:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.403 15:27:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:04.403 { 00:23:04.403 "cntlid": 61, 00:23:04.403 "qid": 0, 00:23:04.403 "state": "enabled", 00:23:04.403 "thread": "nvmf_tgt_poll_group_000", 00:23:04.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:04.403 "listen_address": { 00:23:04.403 "trtype": "RDMA", 00:23:04.403 "adrfam": "IPv4", 00:23:04.403 "traddr": "192.168.100.8", 00:23:04.403 "trsvcid": "4420" 00:23:04.403 }, 00:23:04.403 "peer_address": { 00:23:04.403 "trtype": "RDMA", 00:23:04.403 "adrfam": "IPv4", 00:23:04.403 "traddr": "192.168.100.8", 00:23:04.403 "trsvcid": "59544" 00:23:04.403 }, 00:23:04.403 "auth": { 00:23:04.403 "state": "completed", 00:23:04.403 "digest": "sha384", 00:23:04.403 "dhgroup": "ffdhe2048" 00:23:04.403 } 00:23:04.403 } 00:23:04.403 ]' 00:23:04.403 15:27:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:04.661 15:27:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:23:04.661 15:27:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:04.661 15:27:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:04.661 15:27:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:04.661 15:27:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.661 15:27:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.661 15:27:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.920 15:27:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:23:04.920 15:27:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:23:05.488 15:27:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.488 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:05.488 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.488 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.745 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.745 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:05.745 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:05.745 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:05.745 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:23:05.745 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:05.745 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:05.745 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:05.745 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:05.745 15:27:33 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.745 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:23:05.745 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.745 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.745 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.745 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:05.745 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:05.745 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:06.003 00:23:06.003 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:06.003 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:06.003 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.349 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.349 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:06.349 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.349 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.349 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.349 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:06.349 { 00:23:06.349 "cntlid": 63, 00:23:06.349 "qid": 0, 00:23:06.349 "state": "enabled", 00:23:06.349 "thread": "nvmf_tgt_poll_group_000", 00:23:06.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:06.349 "listen_address": { 00:23:06.349 "trtype": "RDMA", 00:23:06.349 "adrfam": "IPv4", 00:23:06.349 "traddr": "192.168.100.8", 00:23:06.349 "trsvcid": "4420" 00:23:06.349 }, 00:23:06.349 "peer_address": { 00:23:06.349 "trtype": "RDMA", 00:23:06.349 "adrfam": "IPv4", 00:23:06.349 "traddr": "192.168.100.8", 00:23:06.349 "trsvcid": "47189" 00:23:06.349 }, 00:23:06.349 "auth": { 00:23:06.349 "state": "completed", 00:23:06.349 "digest": "sha384", 00:23:06.349 "dhgroup": "ffdhe2048" 00:23:06.349 } 00:23:06.349 } 00:23:06.349 ]' 00:23:06.349 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:06.349 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:06.349 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:06.349 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:06.349 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:06.349 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:06.349 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:06.349 15:27:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.658 15:27:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:23:06.658 15:27:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:23:07.252 15:27:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:07.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:07.510 15:27:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:07.510 15:27:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.510 15:27:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.510 15:27:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.510 15:27:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:07.510 15:27:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:07.510 15:27:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:07.510 15:27:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:07.510 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:23:07.510 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:07.510 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:07.510 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:07.510 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:07.510 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:07.510 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.510 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.510 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.510 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.510 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.510 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.510 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.077 00:23:08.077 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:08.077 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:08.077 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:08.077 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.078 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:08.078 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.078 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.078 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.078 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:08.078 { 00:23:08.078 "cntlid": 65, 00:23:08.078 "qid": 0, 00:23:08.078 "state": "enabled", 00:23:08.078 "thread": "nvmf_tgt_poll_group_000", 00:23:08.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:08.078 "listen_address": { 00:23:08.078 "trtype": "RDMA", 00:23:08.078 "adrfam": "IPv4", 00:23:08.078 "traddr": "192.168.100.8", 00:23:08.078 "trsvcid": "4420" 00:23:08.078 }, 00:23:08.078 "peer_address": { 00:23:08.078 "trtype": "RDMA", 00:23:08.078 "adrfam": "IPv4", 00:23:08.078 "traddr": "192.168.100.8", 00:23:08.078 "trsvcid": "49360" 
00:23:08.078 }, 00:23:08.078 "auth": { 00:23:08.078 "state": "completed", 00:23:08.078 "digest": "sha384", 00:23:08.078 "dhgroup": "ffdhe3072" 00:23:08.078 } 00:23:08.078 } 00:23:08.078 ]' 00:23:08.078 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:08.078 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:08.078 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:08.336 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:08.336 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:08.336 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:08.336 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:08.336 15:27:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:08.594 15:27:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:23:08.594 15:27:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:23:09.160 15:27:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:09.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:09.160 15:27:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:09.160 15:27:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.160 15:27:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.160 15:27:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.160 15:27:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:09.160 15:27:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:09.160 15:27:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:09.418 15:27:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha384 ffdhe3072 1 00:23:09.418 15:27:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:09.418 15:27:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:09.418 15:27:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:09.418 15:27:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:09.418 15:27:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:09.418 15:27:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.418 15:27:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.418 15:27:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.418 15:27:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.418 15:27:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.418 15:27:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.418 15:27:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.676 00:23:09.676 15:27:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:09.676 15:27:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:09.676 15:27:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:09.935 15:27:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.935 15:27:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:09.935 15:27:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.935 15:27:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.935 15:27:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.935 15:27:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:09.935 { 00:23:09.935 "cntlid": 67, 00:23:09.935 "qid": 0, 00:23:09.935 "state": "enabled", 00:23:09.935 "thread": "nvmf_tgt_poll_group_000", 00:23:09.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 
00:23:09.935 "listen_address": { 00:23:09.935 "trtype": "RDMA", 00:23:09.935 "adrfam": "IPv4", 00:23:09.935 "traddr": "192.168.100.8", 00:23:09.935 "trsvcid": "4420" 00:23:09.935 }, 00:23:09.935 "peer_address": { 00:23:09.935 "trtype": "RDMA", 00:23:09.935 "adrfam": "IPv4", 00:23:09.935 "traddr": "192.168.100.8", 00:23:09.935 "trsvcid": "34233" 00:23:09.935 }, 00:23:09.935 "auth": { 00:23:09.935 "state": "completed", 00:23:09.935 "digest": "sha384", 00:23:09.935 "dhgroup": "ffdhe3072" 00:23:09.935 } 00:23:09.935 } 00:23:09.935 ]' 00:23:09.935 15:27:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:09.935 15:27:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:09.935 15:27:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:09.935 15:27:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:09.935 15:27:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:10.193 15:27:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:10.193 15:27:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:10.193 15:27:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:10.451 15:27:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:23:10.451 15:27:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:23:11.018 15:27:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:11.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:11.018 15:27:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:11.018 15:27:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.018 15:27:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.018 15:27:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.018 15:27:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:11.018 15:27:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:11.018 15:27:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:11.276 15:27:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:23:11.276 15:27:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:11.276 15:27:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:11.276 15:27:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:11.276 15:27:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:11.276 15:27:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:11.276 15:27:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.276 15:27:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.276 15:27:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.276 15:27:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.276 15:27:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.276 15:27:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.276 15:27:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.534 00:23:11.534 15:27:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:11.534 15:27:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:11.534 15:27:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:11.793 15:27:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.793 15:27:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:11.793 15:27:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.793 15:27:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.793 15:27:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.793 15:27:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:23:11.793 { 00:23:11.793 "cntlid": 69, 00:23:11.793 "qid": 0, 00:23:11.793 "state": "enabled", 00:23:11.793 "thread": "nvmf_tgt_poll_group_000", 00:23:11.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:11.793 "listen_address": { 00:23:11.793 "trtype": "RDMA", 00:23:11.793 "adrfam": "IPv4", 00:23:11.793 "traddr": "192.168.100.8", 00:23:11.793 "trsvcid": "4420" 00:23:11.793 }, 00:23:11.793 "peer_address": { 00:23:11.793 "trtype": "RDMA", 00:23:11.793 "adrfam": "IPv4", 00:23:11.793 "traddr": "192.168.100.8", 00:23:11.793 "trsvcid": "47353" 00:23:11.793 }, 00:23:11.793 "auth": { 00:23:11.793 "state": "completed", 00:23:11.793 "digest": "sha384", 00:23:11.793 "dhgroup": "ffdhe3072" 00:23:11.793 } 00:23:11.793 } 00:23:11.793 ]' 00:23:11.793 15:27:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:11.793 15:27:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:11.793 15:27:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:11.793 15:27:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:11.793 15:27:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:11.793 15:27:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:11.793 15:27:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:11.793 15:27:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:12.051 15:27:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:23:12.051 15:27:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:23:12.985 15:27:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:12.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:12.985 15:27:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:12.985 15:27:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.985 15:27:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.985 15:27:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.985 15:27:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:12.985 15:27:40 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:12.985 15:27:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:12.985 15:27:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:23:12.985 15:27:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:12.985 15:27:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:12.985 15:27:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:12.985 15:27:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:12.985 15:27:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:12.985 15:27:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:23:12.985 15:27:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.985 15:27:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.985 15:27:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.985 15:27:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:12.985 15:27:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:12.985 15:27:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:13.243 00:23:13.243 15:27:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:13.243 15:27:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.243 15:27:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:13.502 15:27:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.502 15:27:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:13.502 15:27:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.502 15:27:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.502 15:27:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.502 15:27:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:13.502 { 00:23:13.502 "cntlid": 71, 00:23:13.502 "qid": 0, 00:23:13.502 "state": "enabled", 00:23:13.502 "thread": "nvmf_tgt_poll_group_000", 00:23:13.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:13.502 "listen_address": { 00:23:13.502 "trtype": "RDMA", 00:23:13.502 "adrfam": "IPv4", 00:23:13.502 "traddr": "192.168.100.8", 00:23:13.502 "trsvcid": "4420" 00:23:13.502 }, 00:23:13.502 "peer_address": { 00:23:13.502 "trtype": "RDMA", 00:23:13.502 "adrfam": "IPv4", 00:23:13.502 "traddr": "192.168.100.8", 00:23:13.502 "trsvcid": "37329" 00:23:13.502 }, 00:23:13.502 "auth": { 00:23:13.502 "state": "completed", 00:23:13.502 "digest": "sha384", 00:23:13.502 "dhgroup": "ffdhe3072" 00:23:13.502 } 00:23:13.502 } 00:23:13.502 ]' 00:23:13.502 15:27:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:13.760 15:27:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:13.760 15:27:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:13.760 15:27:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:13.760 15:27:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:13.760 15:27:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:13.760 15:27:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:13.760 15:27:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:14.018 15:27:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:23:14.018 15:27:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:23:14.585 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:14.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:14.585 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:14.585 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.585 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.585 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.585 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in 
"${dhgroups[@]}" 00:23:14.585 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:14.585 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:14.585 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:14.844 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:23:14.844 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:14.844 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:14.844 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:14.844 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:14.844 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:14.844 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.844 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.844 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.844 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.844 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.844 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.844 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.103 00:23:15.103 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:15.103 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:15.103 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:15.362 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.362 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:15.362 15:27:42 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.362 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.362 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.362 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:15.362 { 00:23:15.362 "cntlid": 73, 00:23:15.362 "qid": 0, 00:23:15.362 "state": "enabled", 00:23:15.362 "thread": "nvmf_tgt_poll_group_000", 00:23:15.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:15.362 "listen_address": { 00:23:15.363 "trtype": "RDMA", 00:23:15.363 "adrfam": "IPv4", 00:23:15.363 "traddr": "192.168.100.8", 00:23:15.363 "trsvcid": "4420" 00:23:15.363 }, 00:23:15.363 "peer_address": { 00:23:15.363 "trtype": "RDMA", 00:23:15.363 "adrfam": "IPv4", 00:23:15.363 "traddr": "192.168.100.8", 00:23:15.363 "trsvcid": "58199" 00:23:15.363 }, 00:23:15.363 "auth": { 00:23:15.363 "state": "completed", 00:23:15.363 "digest": "sha384", 00:23:15.363 "dhgroup": "ffdhe4096" 00:23:15.363 } 00:23:15.363 } 00:23:15.363 ]' 00:23:15.363 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:15.363 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:15.363 15:27:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:15.622 15:27:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:15.622 15:27:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:15.622 15:27:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:15.622 15:27:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:15.622 15:27:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.881 15:27:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:23:15.882 15:27:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:23:16.450 15:27:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:16.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:16.450 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:16.450 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.450 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.450 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.450 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:16.450 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:16.450 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:16.710 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:23:16.710 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:16.710 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:16.710 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:16.710 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:16.710 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:16.710 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.710 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.710 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.710 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.710 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.710 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.710 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.969 00:23:16.969 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:16.969 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:16.970 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.229 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.229 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:17.229 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.229 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.229 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.229 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:17.229 { 00:23:17.229 "cntlid": 75, 00:23:17.229 "qid": 0, 00:23:17.229 "state": "enabled", 00:23:17.229 "thread": "nvmf_tgt_poll_group_000", 00:23:17.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:17.229 "listen_address": { 00:23:17.229 "trtype": "RDMA", 00:23:17.229 "adrfam": "IPv4", 00:23:17.229 "traddr": "192.168.100.8", 00:23:17.229 "trsvcid": "4420" 00:23:17.229 }, 00:23:17.229 "peer_address": { 00:23:17.229 "trtype": "RDMA", 00:23:17.229 "adrfam": "IPv4", 00:23:17.229 "traddr": "192.168.100.8", 00:23:17.229 "trsvcid": "54674" 00:23:17.229 }, 00:23:17.229 "auth": { 00:23:17.229 "state": "completed", 00:23:17.229 "digest": "sha384", 00:23:17.229 "dhgroup": "ffdhe4096" 00:23:17.229 } 00:23:17.229 } 00:23:17.229 ]' 00:23:17.229 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:17.229 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:17.229 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:17.229 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:17.229 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:17.488 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:17.488 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.488 15:27:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:17.748 15:27:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:23:17.748 15:27:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:23:18.316 15:27:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:18.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:18.317 15:27:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:18.317 15:27:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.317 15:27:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.317 15:27:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.317 15:27:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:18.317 15:27:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:18.317 15:27:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:18.576 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:23:18.576 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:18.576 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:18.576 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:18.576 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:18.576 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:18.576 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.576 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.576 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.576 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.576 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.576 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.576 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.836 00:23:18.836 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 
-- # hostrpc bdev_nvme_get_controllers 00:23:18.836 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:18.836 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:19.096 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.096 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:19.096 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.096 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.096 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.096 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:19.096 { 00:23:19.096 "cntlid": 77, 00:23:19.096 "qid": 0, 00:23:19.096 "state": "enabled", 00:23:19.096 "thread": "nvmf_tgt_poll_group_000", 00:23:19.096 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:19.096 "listen_address": { 00:23:19.096 "trtype": "RDMA", 00:23:19.096 "adrfam": "IPv4", 00:23:19.096 "traddr": "192.168.100.8", 00:23:19.096 "trsvcid": "4420" 00:23:19.096 }, 00:23:19.096 "peer_address": { 00:23:19.096 "trtype": "RDMA", 00:23:19.096 "adrfam": "IPv4", 00:23:19.096 "traddr": "192.168.100.8", 00:23:19.096 "trsvcid": "54520" 00:23:19.096 }, 00:23:19.096 "auth": { 00:23:19.096 "state": "completed", 00:23:19.096 "digest": "sha384", 00:23:19.096 "dhgroup": "ffdhe4096" 00:23:19.096 } 00:23:19.096 } 00:23:19.096 ]' 00:23:19.096 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:19.096 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:19.096 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:19.096 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:19.096 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:19.356 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:19.356 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:19.356 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:19.356 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:23:19.356 15:27:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:23:20.295 15:27:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:20.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:20.295 15:27:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:20.295 15:27:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.295 15:27:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.295 15:27:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.295 15:27:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:20.295 15:27:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:20.295 15:27:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:20.295 15:27:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:23:20.295 15:27:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:20.295 15:27:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:20.295 15:27:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:20.295 15:27:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:20.295 15:27:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:20.295 15:27:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:23:20.295 15:27:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.295 15:27:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.295 15:27:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.295 15:27:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:20.295 15:27:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:20.295 15:27:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:20.865 00:23:20.865 15:27:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:20.865 15:27:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:20.865 15:27:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:20.865 15:27:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.865 15:27:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:20.865 15:27:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.865 15:27:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.865 15:27:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.865 15:27:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:20.865 { 00:23:20.865 "cntlid": 79, 00:23:20.865 "qid": 0, 00:23:20.865 "state": "enabled", 00:23:20.865 "thread": "nvmf_tgt_poll_group_000", 00:23:20.865 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:20.865 "listen_address": { 00:23:20.865 "trtype": "RDMA", 00:23:20.865 "adrfam": "IPv4", 00:23:20.865 "traddr": "192.168.100.8", 00:23:20.865 "trsvcid": "4420" 00:23:20.865 }, 00:23:20.865 "peer_address": { 00:23:20.865 "trtype": "RDMA", 00:23:20.865 "adrfam": "IPv4", 00:23:20.865 "traddr": "192.168.100.8", 00:23:20.865 "trsvcid": "55543" 00:23:20.865 }, 00:23:20.865 "auth": { 00:23:20.865 "state": "completed", 00:23:20.865 "digest": "sha384", 00:23:20.865 "dhgroup": "ffdhe4096" 00:23:20.865 } 00:23:20.865 } 00:23:20.865 ]' 00:23:20.865 15:27:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:20.865 15:27:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:20.865 15:27:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:21.124 15:27:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:21.124 15:27:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:21.124 15:27:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:21.124 15:27:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:21.124 15:27:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:21.384 15:27:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:23:21.384 15:27:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:23:21.953 15:27:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:21.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:21.953 15:27:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:21.953 15:27:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.953 15:27:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.953 15:27:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.953 15:27:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:21.953 15:27:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:21.953 15:27:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:21.953 15:27:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:22.211 15:27:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:23:22.211 15:27:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:22.211 15:27:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:22.211 15:27:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:22.211 15:27:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:22.211 15:27:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:22.211 15:27:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.211 15:27:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.211 15:27:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.211 15:27:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.211 15:27:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.211 15:27:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.211 15:27:49 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.470 00:23:22.730 15:27:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:22.730 15:27:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:22.730 15:27:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:22.730 15:27:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.730 15:27:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:22.730 15:27:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.730 15:27:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.730 15:27:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.730 15:27:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:22.730 { 00:23:22.730 "cntlid": 81, 00:23:22.730 "qid": 0, 00:23:22.730 "state": "enabled", 00:23:22.730 "thread": "nvmf_tgt_poll_group_000", 00:23:22.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:22.730 "listen_address": { 00:23:22.730 "trtype": "RDMA", 00:23:22.730 "adrfam": "IPv4", 00:23:22.730 "traddr": "192.168.100.8", 00:23:22.730 "trsvcid": "4420" 00:23:22.730 }, 00:23:22.730 "peer_address": { 00:23:22.730 "trtype": "RDMA", 00:23:22.730 "adrfam": "IPv4", 00:23:22.730 "traddr": "192.168.100.8", 00:23:22.730 "trsvcid": "49110" 00:23:22.730 }, 00:23:22.730 "auth": { 00:23:22.730 "state": "completed", 00:23:22.730 "digest": "sha384", 00:23:22.730 "dhgroup": "ffdhe6144" 00:23:22.730 } 00:23:22.730 } 00:23:22.730 ]' 00:23:22.730 15:27:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:22.989 15:27:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:22.989 15:27:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:22.989 15:27:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:22.989 15:27:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:22.989 15:27:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:22.989 15:27:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:22.989 15:27:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:23.249 15:27:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:23:23.249 15:27:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:23:23.817 15:27:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:24.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:24.077 15:27:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:24.077 15:27:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.077 15:27:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.077 15:27:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.077 15:27:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:24.077 15:27:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:24.077 15:27:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:24.077 15:27:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:23:24.077 15:27:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:24.077 15:27:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:24.077 15:27:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:24.077 15:27:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:24.077 15:27:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:24.077 15:27:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:24.077 15:27:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.077 15:27:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.077 15:27:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.077 15:27:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
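The exchange above is one pass of the test's connect_authenticate loop (sha384 with ffdhe6144, key1). Condensed to the commands that actually appear in this log, a single host-side pass looks roughly like the sketch below; the socket path, addresses, NQNs and key names are the ones printed above, rpc_cmd is the test suite's wrapper around the target-side rpc.py, and the DHHC-1 key material itself is not repeated here. This is a reconstruction from the logged calls, not a verbatim excerpt of target/auth.sh.

# Minimal sketch of one connect_authenticate pass, assembled from the commands logged above.
HOSTRPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562

# Limit the host NVMe driver to the digest/dhgroup combination under test.
$HOSTRPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# Allow the host NQN on the subsystem with the DH-HMAC-CHAP key pair (target-side rpc_cmd).
rpc_cmd nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attach a controller over RDMA with the same keys; authentication runs during connect.
$HOSTRPC bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify the controller came up and the qpair negotiated the expected auth parameters.
$HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name'                     # expect nvme0
rpc_cmd nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.digest'      # expect sha384
rpc_cmd nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.dhgroup'     # expect ffdhe6144
rpc_cmd nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'       # expect completed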
00:23:24.077 15:27:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:24.077 15:27:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:24.651 00:23:24.651 15:27:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:24.651 15:27:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:24.651 15:27:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:24.651 15:27:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.651 15:27:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:24.651 15:27:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.651 15:27:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.651 15:27:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.911 15:27:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:24.911 { 00:23:24.911 "cntlid": 83, 00:23:24.911 "qid": 0, 00:23:24.911 "state": "enabled", 00:23:24.911 "thread": "nvmf_tgt_poll_group_000", 00:23:24.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:24.911 "listen_address": { 00:23:24.911 "trtype": "RDMA", 00:23:24.911 "adrfam": "IPv4", 00:23:24.911 "traddr": "192.168.100.8", 00:23:24.911 "trsvcid": "4420" 00:23:24.911 }, 00:23:24.911 "peer_address": { 00:23:24.911 "trtype": "RDMA", 00:23:24.911 "adrfam": "IPv4", 00:23:24.911 "traddr": "192.168.100.8", 00:23:24.911 "trsvcid": "39804" 00:23:24.912 }, 00:23:24.912 "auth": { 00:23:24.912 "state": "completed", 00:23:24.912 "digest": "sha384", 00:23:24.912 "dhgroup": "ffdhe6144" 00:23:24.912 } 00:23:24.912 } 00:23:24.912 ]' 00:23:24.912 15:27:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:24.912 15:27:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:24.912 15:27:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:24.912 15:27:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:24.912 15:27:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:24.912 15:27:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:24.912 15:27:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
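Once the qpair checks pass, the iteration re-validates the same key through the kernel initiator before moving on: the bdev_nvme_detach_controller call above is followed by an nvme-cli connect that passes the DHHC-1 secrets directly, a disconnect, and removal of the host from the subsystem. Condensed from those logged commands, with the secret strings held in placeholder variables rather than repeated, that leg is roughly:

# Sketch of the nvme-cli leg and per-iteration teardown, based on the
# nvme connect / nvme disconnect / nvmf_subsystem_remove_host calls in this log.
# DHCHAP_KEY and DHCHAP_CTRL_KEY stand in for the DHHC-1:... strings shown above.
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTID=809f3706-e051-e711-906e-0017a4403562
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID

# Kernel-initiator connect with bidirectional DH-HMAC-CHAP secrets.
nvme connect -t rdma -a 192.168.100.8 -n $SUBNQN -i 1 \
    -q $HOSTNQN --hostid $HOSTID -l 0 \
    --dhchap-secret "$DHCHAP_KEY" --dhchap-ctrl-secret "$DHCHAP_CTRL_KEY"

# Tear down before the next digest/dhgroup/key combination.
nvme disconnect -n $SUBNQN                            # expect: disconnected 1 controller(s)
rpc_cmd nvmf_subsystem_remove_host $SUBNQN $HOSTNQN   # target-side rpc_cmd wrapper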
00:23:24.912 15:27:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:25.171 15:27:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:23:25.171 15:27:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:23:25.740 15:27:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:26.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:26.001 15:27:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:26.001 15:27:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.001 15:27:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.001 15:27:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.001 15:27:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:26.001 15:27:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:26.001 15:27:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:26.001 15:27:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:23:26.001 15:27:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:26.001 15:27:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:26.001 15:27:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:26.001 15:27:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:26.001 15:27:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:26.001 15:27:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.001 15:27:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.001 15:27:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.001 15:27:53 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.001 15:27:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.001 15:27:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.260 15:27:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.522 00:23:26.522 15:27:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:26.522 15:27:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:26.522 15:27:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:26.782 15:27:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.782 15:27:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:26.782 15:27:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.782 15:27:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.782 15:27:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.782 15:27:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:26.782 { 00:23:26.782 "cntlid": 85, 00:23:26.782 "qid": 0, 00:23:26.782 "state": "enabled", 00:23:26.782 "thread": "nvmf_tgt_poll_group_000", 00:23:26.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:26.782 "listen_address": { 00:23:26.782 "trtype": "RDMA", 00:23:26.782 "adrfam": "IPv4", 00:23:26.782 "traddr": "192.168.100.8", 00:23:26.782 "trsvcid": "4420" 00:23:26.782 }, 00:23:26.782 "peer_address": { 00:23:26.782 "trtype": "RDMA", 00:23:26.782 "adrfam": "IPv4", 00:23:26.782 "traddr": "192.168.100.8", 00:23:26.782 "trsvcid": "58369" 00:23:26.782 }, 00:23:26.782 "auth": { 00:23:26.782 "state": "completed", 00:23:26.782 "digest": "sha384", 00:23:26.782 "dhgroup": "ffdhe6144" 00:23:26.782 } 00:23:26.782 } 00:23:26.782 ]' 00:23:26.782 15:27:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:26.782 15:27:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:26.782 15:27:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:26.782 15:27:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:26.782 15:27:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:26.782 
15:27:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:26.782 15:27:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:26.782 15:27:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:27.041 15:27:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:23:27.041 15:27:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:23:27.609 15:27:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:27.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:27.869 15:27:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:27.869 15:27:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.869 15:27:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.869 15:27:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.869 15:27:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:27.869 15:27:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:27.869 15:27:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:28.128 15:27:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:23:28.128 15:27:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:28.128 15:27:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:28.128 15:27:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:28.128 15:27:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:28.128 15:27:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:28.128 15:27:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:23:28.128 15:27:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.128 15:27:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.128 15:27:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.128 15:27:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:28.129 15:27:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:28.129 15:27:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:28.388 00:23:28.388 15:27:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:28.388 15:27:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:28.388 15:27:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:28.648 15:27:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.648 15:27:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:28.648 15:27:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.648 15:27:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.648 15:27:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.648 15:27:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:28.648 { 00:23:28.648 "cntlid": 87, 00:23:28.648 "qid": 0, 00:23:28.648 "state": "enabled", 00:23:28.648 "thread": "nvmf_tgt_poll_group_000", 00:23:28.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:28.648 "listen_address": { 00:23:28.648 "trtype": "RDMA", 00:23:28.648 "adrfam": "IPv4", 00:23:28.648 "traddr": "192.168.100.8", 00:23:28.648 "trsvcid": "4420" 00:23:28.648 }, 00:23:28.648 "peer_address": { 00:23:28.648 "trtype": "RDMA", 00:23:28.648 "adrfam": "IPv4", 00:23:28.648 "traddr": "192.168.100.8", 00:23:28.648 "trsvcid": "59187" 00:23:28.648 }, 00:23:28.648 "auth": { 00:23:28.648 "state": "completed", 00:23:28.648 "digest": "sha384", 00:23:28.648 "dhgroup": "ffdhe6144" 00:23:28.648 } 00:23:28.648 } 00:23:28.648 ]' 00:23:28.648 15:27:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:28.648 15:27:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:28.648 15:27:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:28.648 15:27:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:23:28.648 15:27:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:28.907 15:27:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:28.907 15:27:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:28.907 15:27:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:28.907 15:27:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:23:28.907 15:27:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:23:29.846 15:27:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:29.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:29.846 15:27:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:29.846 15:27:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.846 15:27:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.846 15:27:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.846 15:27:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:29.846 15:27:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:29.846 15:27:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:29.846 15:27:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:29.846 15:27:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:23:29.846 15:27:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:29.846 15:27:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:29.846 15:27:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:29.846 15:27:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:29.846 15:27:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:29.846 15:27:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.846 15:27:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.846 15:27:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.846 15:27:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.846 15:27:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.846 15:27:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.846 15:27:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.415 00:23:30.415 15:27:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:30.415 15:27:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:30.415 15:27:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:30.674 15:27:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.674 15:27:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:30.674 15:27:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.674 15:27:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.674 15:27:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.674 15:27:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:30.674 { 00:23:30.674 "cntlid": 89, 00:23:30.674 "qid": 0, 00:23:30.674 "state": "enabled", 00:23:30.674 "thread": "nvmf_tgt_poll_group_000", 00:23:30.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:30.674 "listen_address": { 00:23:30.674 "trtype": "RDMA", 00:23:30.674 "adrfam": "IPv4", 00:23:30.674 "traddr": "192.168.100.8", 00:23:30.674 "trsvcid": "4420" 00:23:30.674 }, 00:23:30.674 "peer_address": { 00:23:30.674 "trtype": "RDMA", 00:23:30.674 "adrfam": "IPv4", 00:23:30.674 "traddr": "192.168.100.8", 00:23:30.674 "trsvcid": "44011" 00:23:30.674 }, 00:23:30.674 "auth": { 00:23:30.674 "state": "completed", 00:23:30.674 "digest": "sha384", 00:23:30.674 "dhgroup": "ffdhe8192" 00:23:30.674 } 00:23:30.674 } 00:23:30.674 ]' 00:23:30.674 15:27:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:30.674 15:27:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:30.674 15:27:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:30.674 15:27:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:30.674 15:27:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:30.933 15:27:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:30.934 15:27:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:30.934 15:27:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:30.934 15:27:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:23:30.934 15:27:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:23:31.872 15:27:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:31.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:31.872 15:27:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:31.872 15:27:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.872 15:27:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.872 15:27:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.872 15:27:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:31.872 15:27:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:31.872 15:27:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:31.872 15:27:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:23:31.872 15:27:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:31.872 15:27:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:31.872 15:27:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 
00:23:31.872 15:27:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:31.872 15:27:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:31.872 15:27:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:31.872 15:27:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.872 15:27:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.131 15:27:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.131 15:27:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.131 15:27:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.131 15:27:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.391 00:23:32.391 15:28:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:32.391 15:28:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:32.391 15:28:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:32.651 15:28:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.651 15:28:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:32.651 15:28:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.651 15:28:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.651 15:28:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.651 15:28:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:32.651 { 00:23:32.651 "cntlid": 91, 00:23:32.651 "qid": 0, 00:23:32.651 "state": "enabled", 00:23:32.651 "thread": "nvmf_tgt_poll_group_000", 00:23:32.651 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:32.651 "listen_address": { 00:23:32.651 "trtype": "RDMA", 00:23:32.651 "adrfam": "IPv4", 00:23:32.651 "traddr": "192.168.100.8", 00:23:32.651 "trsvcid": "4420" 00:23:32.651 }, 00:23:32.651 "peer_address": { 00:23:32.651 "trtype": "RDMA", 00:23:32.651 "adrfam": "IPv4", 00:23:32.651 "traddr": "192.168.100.8", 00:23:32.651 "trsvcid": "60018" 00:23:32.651 }, 00:23:32.651 "auth": { 
00:23:32.651 "state": "completed", 00:23:32.651 "digest": "sha384", 00:23:32.651 "dhgroup": "ffdhe8192" 00:23:32.651 } 00:23:32.651 } 00:23:32.651 ]' 00:23:32.651 15:28:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:32.651 15:28:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:32.651 15:28:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:32.911 15:28:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:32.911 15:28:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:32.911 15:28:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:32.911 15:28:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:32.911 15:28:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:33.171 15:28:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:23:33.171 15:28:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:23:33.739 15:28:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:33.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:33.739 15:28:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:33.739 15:28:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.739 15:28:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.739 15:28:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.739 15:28:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:33.739 15:28:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:33.739 15:28:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:33.998 15:28:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:23:33.998 15:28:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:23:33.998 15:28:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:33.998 15:28:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:33.998 15:28:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:33.998 15:28:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:33.999 15:28:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.999 15:28:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.999 15:28:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.999 15:28:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.999 15:28:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.999 15:28:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.999 15:28:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.566 00:23:34.567 15:28:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:34.567 15:28:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:34.567 15:28:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:34.826 15:28:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.826 15:28:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:34.826 15:28:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.826 15:28:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.826 15:28:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.826 15:28:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:34.826 { 00:23:34.826 "cntlid": 93, 00:23:34.826 "qid": 0, 00:23:34.826 "state": "enabled", 00:23:34.826 "thread": "nvmf_tgt_poll_group_000", 00:23:34.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:34.826 "listen_address": { 00:23:34.826 "trtype": "RDMA", 00:23:34.826 "adrfam": "IPv4", 00:23:34.826 "traddr": "192.168.100.8", 
00:23:34.826 "trsvcid": "4420" 00:23:34.826 }, 00:23:34.826 "peer_address": { 00:23:34.826 "trtype": "RDMA", 00:23:34.826 "adrfam": "IPv4", 00:23:34.826 "traddr": "192.168.100.8", 00:23:34.826 "trsvcid": "45531" 00:23:34.826 }, 00:23:34.826 "auth": { 00:23:34.826 "state": "completed", 00:23:34.826 "digest": "sha384", 00:23:34.826 "dhgroup": "ffdhe8192" 00:23:34.826 } 00:23:34.826 } 00:23:34.826 ]' 00:23:34.826 15:28:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:34.826 15:28:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:34.826 15:28:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:34.826 15:28:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:34.826 15:28:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:34.826 15:28:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:34.826 15:28:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:34.826 15:28:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:35.086 15:28:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:23:35.086 15:28:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:23:35.655 15:28:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:35.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:35.914 15:28:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:35.914 15:28:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.914 15:28:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.914 15:28:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.914 15:28:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:35.914 15:28:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:35.914 15:28:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:23:36.174 15:28:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:23:36.174 15:28:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:36.174 15:28:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:36.174 15:28:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:36.174 15:28:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:36.174 15:28:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:36.174 15:28:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:23:36.174 15:28:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.174 15:28:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.174 15:28:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.174 15:28:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:36.174 15:28:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:36.174 15:28:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:36.433 00:23:36.433 15:28:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:36.433 15:28:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:36.433 15:28:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:36.692 15:28:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.692 15:28:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:36.692 15:28:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.692 15:28:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.692 15:28:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.692 15:28:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:36.692 { 00:23:36.692 "cntlid": 95, 00:23:36.692 "qid": 0, 00:23:36.692 "state": "enabled", 00:23:36.692 "thread": "nvmf_tgt_poll_group_000", 00:23:36.692 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:36.692 "listen_address": { 00:23:36.692 "trtype": "RDMA", 00:23:36.692 "adrfam": "IPv4", 00:23:36.692 "traddr": "192.168.100.8", 00:23:36.692 "trsvcid": "4420" 00:23:36.692 }, 00:23:36.692 "peer_address": { 00:23:36.692 "trtype": "RDMA", 00:23:36.692 "adrfam": "IPv4", 00:23:36.692 "traddr": "192.168.100.8", 00:23:36.692 "trsvcid": "47479" 00:23:36.692 }, 00:23:36.692 "auth": { 00:23:36.692 "state": "completed", 00:23:36.692 "digest": "sha384", 00:23:36.692 "dhgroup": "ffdhe8192" 00:23:36.692 } 00:23:36.692 } 00:23:36.692 ]' 00:23:36.692 15:28:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:36.692 15:28:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:36.692 15:28:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:36.953 15:28:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:36.953 15:28:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:36.953 15:28:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:36.953 15:28:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:36.953 15:28:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:37.212 15:28:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:23:37.212 15:28:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:23:37.781 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:37.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:37.781 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:37.781 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.781 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.781 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.781 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:23:37.781 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:37.781 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:37.781 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:37.781 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:38.041 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:23:38.041 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:38.041 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:38.041 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:38.041 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:38.041 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:38.041 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.041 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.041 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.041 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.041 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.041 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.041 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.301 00:23:38.301 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:38.301 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:38.301 15:28:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:38.560 15:28:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.560 15:28:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:38.560 15:28:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.560 15:28:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.560 15:28:06 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.560 15:28:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:38.560 { 00:23:38.560 "cntlid": 97, 00:23:38.560 "qid": 0, 00:23:38.560 "state": "enabled", 00:23:38.560 "thread": "nvmf_tgt_poll_group_000", 00:23:38.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:38.560 "listen_address": { 00:23:38.560 "trtype": "RDMA", 00:23:38.560 "adrfam": "IPv4", 00:23:38.560 "traddr": "192.168.100.8", 00:23:38.560 "trsvcid": "4420" 00:23:38.560 }, 00:23:38.560 "peer_address": { 00:23:38.560 "trtype": "RDMA", 00:23:38.560 "adrfam": "IPv4", 00:23:38.560 "traddr": "192.168.100.8", 00:23:38.560 "trsvcid": "45038" 00:23:38.560 }, 00:23:38.560 "auth": { 00:23:38.560 "state": "completed", 00:23:38.560 "digest": "sha512", 00:23:38.560 "dhgroup": "null" 00:23:38.560 } 00:23:38.560 } 00:23:38.560 ]' 00:23:38.560 15:28:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:38.560 15:28:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:38.560 15:28:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:38.560 15:28:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:38.560 15:28:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:38.560 15:28:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:38.560 15:28:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:38.560 15:28:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:38.820 15:28:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:23:38.820 15:28:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:23:39.389 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:39.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:39.649 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:39.649 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.649 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:23:39.649 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.649 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:39.649 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:39.649 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:39.908 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:23:39.908 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:39.908 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:39.908 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:39.908 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:39.908 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:39.908 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.908 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.908 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.908 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.908 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.908 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.908 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:40.168 00:23:40.168 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:40.168 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:40.168 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:40.168 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.168 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:40.168 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.168 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.427 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.427 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:40.427 { 00:23:40.427 "cntlid": 99, 00:23:40.427 "qid": 0, 00:23:40.427 "state": "enabled", 00:23:40.427 "thread": "nvmf_tgt_poll_group_000", 00:23:40.427 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:40.427 "listen_address": { 00:23:40.427 "trtype": "RDMA", 00:23:40.427 "adrfam": "IPv4", 00:23:40.427 "traddr": "192.168.100.8", 00:23:40.427 "trsvcid": "4420" 00:23:40.427 }, 00:23:40.427 "peer_address": { 00:23:40.427 "trtype": "RDMA", 00:23:40.427 "adrfam": "IPv4", 00:23:40.427 "traddr": "192.168.100.8", 00:23:40.427 "trsvcid": "35101" 00:23:40.427 }, 00:23:40.427 "auth": { 00:23:40.427 "state": "completed", 00:23:40.427 "digest": "sha512", 00:23:40.427 "dhgroup": "null" 00:23:40.427 } 00:23:40.427 } 00:23:40.427 ]' 00:23:40.427 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:40.427 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:40.427 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:40.427 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:40.427 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:40.427 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:40.427 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:40.427 15:28:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:40.689 15:28:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:23:40.689 15:28:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:23:41.262 15:28:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:41.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:41.262 15:28:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:41.262 
15:28:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.262 15:28:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.521 15:28:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.521 15:28:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:41.521 15:28:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:41.521 15:28:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:41.521 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:23:41.521 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:41.521 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:41.521 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:41.521 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:41.521 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:41.521 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.521 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.521 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.521 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.521 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.521 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.521 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.780 00:23:41.780 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:41.780 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:41.780 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:42.039 
15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.040 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:42.040 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.040 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.040 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.040 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:42.040 { 00:23:42.040 "cntlid": 101, 00:23:42.040 "qid": 0, 00:23:42.040 "state": "enabled", 00:23:42.040 "thread": "nvmf_tgt_poll_group_000", 00:23:42.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:42.040 "listen_address": { 00:23:42.040 "trtype": "RDMA", 00:23:42.040 "adrfam": "IPv4", 00:23:42.040 "traddr": "192.168.100.8", 00:23:42.040 "trsvcid": "4420" 00:23:42.040 }, 00:23:42.040 "peer_address": { 00:23:42.040 "trtype": "RDMA", 00:23:42.040 "adrfam": "IPv4", 00:23:42.040 "traddr": "192.168.100.8", 00:23:42.040 "trsvcid": "39793" 00:23:42.040 }, 00:23:42.040 "auth": { 00:23:42.040 "state": "completed", 00:23:42.040 "digest": "sha512", 00:23:42.040 "dhgroup": "null" 00:23:42.040 } 00:23:42.040 } 00:23:42.040 ]' 00:23:42.040 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:42.040 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:42.040 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:42.298 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:42.298 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:42.298 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:42.298 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:42.298 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:42.557 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:23:42.557 15:28:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:23:43.126 15:28:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:43.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:43.126 15:28:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:43.126 15:28:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.126 15:28:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.126 15:28:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.126 15:28:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:43.126 15:28:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:43.127 15:28:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:43.386 15:28:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:23:43.386 15:28:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:43.386 15:28:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:43.386 15:28:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:43.386 15:28:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:43.386 15:28:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:43.386 15:28:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:23:43.386 15:28:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.386 15:28:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.386 15:28:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.386 15:28:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:43.386 15:28:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:43.386 15:28:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:43.645 00:23:43.645 15:28:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:43.645 15:28:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:43.645 15:28:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:43.905 15:28:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.905 15:28:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:43.905 15:28:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.905 15:28:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.905 15:28:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.905 15:28:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:43.905 { 00:23:43.905 "cntlid": 103, 00:23:43.905 "qid": 0, 00:23:43.905 "state": "enabled", 00:23:43.905 "thread": "nvmf_tgt_poll_group_000", 00:23:43.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:43.905 "listen_address": { 00:23:43.905 "trtype": "RDMA", 00:23:43.905 "adrfam": "IPv4", 00:23:43.905 "traddr": "192.168.100.8", 00:23:43.905 "trsvcid": "4420" 00:23:43.905 }, 00:23:43.905 "peer_address": { 00:23:43.905 "trtype": "RDMA", 00:23:43.905 "adrfam": "IPv4", 00:23:43.905 "traddr": "192.168.100.8", 00:23:43.905 "trsvcid": "49401" 00:23:43.905 }, 00:23:43.905 "auth": { 00:23:43.905 "state": "completed", 00:23:43.905 "digest": "sha512", 00:23:43.905 "dhgroup": "null" 00:23:43.905 } 00:23:43.905 } 00:23:43.905 ]' 00:23:43.905 15:28:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:43.905 15:28:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:43.905 15:28:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:43.905 15:28:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:43.905 15:28:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:43.905 15:28:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:43.905 15:28:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:43.905 15:28:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:44.165 15:28:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:23:44.165 15:28:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:23:44.733 15:28:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:44.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:44.993 15:28:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:44.993 15:28:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.993 15:28:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.993 15:28:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.993 15:28:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:44.993 15:28:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:44.993 15:28:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:44.993 15:28:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:45.252 15:28:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:23:45.252 15:28:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:45.252 15:28:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:45.252 15:28:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:45.252 15:28:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:45.252 15:28:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:45.252 15:28:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:45.252 15:28:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.252 15:28:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.252 15:28:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.252 15:28:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:45.252 15:28:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:45.252 15:28:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:45.512 00:23:45.512 15:28:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:23:45.512 15:28:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:45.512 15:28:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:45.771 15:28:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.771 15:28:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:45.771 15:28:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.771 15:28:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.771 15:28:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.771 15:28:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:45.771 { 00:23:45.771 "cntlid": 105, 00:23:45.771 "qid": 0, 00:23:45.771 "state": "enabled", 00:23:45.771 "thread": "nvmf_tgt_poll_group_000", 00:23:45.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:45.771 "listen_address": { 00:23:45.771 "trtype": "RDMA", 00:23:45.771 "adrfam": "IPv4", 00:23:45.771 "traddr": "192.168.100.8", 00:23:45.771 "trsvcid": "4420" 00:23:45.771 }, 00:23:45.771 "peer_address": { 00:23:45.771 "trtype": "RDMA", 00:23:45.771 "adrfam": "IPv4", 00:23:45.771 "traddr": "192.168.100.8", 00:23:45.771 "trsvcid": "47995" 00:23:45.771 }, 00:23:45.771 "auth": { 00:23:45.771 "state": "completed", 00:23:45.771 "digest": "sha512", 00:23:45.771 "dhgroup": "ffdhe2048" 00:23:45.771 } 00:23:45.771 } 00:23:45.771 ]' 00:23:45.771 15:28:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:45.771 15:28:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:45.771 15:28:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:45.771 15:28:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:45.771 15:28:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:45.771 15:28:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:45.771 15:28:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:45.771 15:28:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:46.031 15:28:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:23:46.031 15:28:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 
--dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:23:46.600 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:46.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:46.860 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:46.860 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.860 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.860 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.860 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:46.860 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:46.860 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:46.860 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:23:46.860 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:46.860 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:46.860 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:47.119 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:47.119 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:47.119 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:47.119 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.119 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.119 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.119 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:47.119 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:47.119 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:47.119 00:23:47.379 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:47.379 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:47.379 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:47.379 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.379 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:47.379 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.379 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.379 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.379 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:47.379 { 00:23:47.379 "cntlid": 107, 00:23:47.379 "qid": 0, 00:23:47.379 "state": "enabled", 00:23:47.379 "thread": "nvmf_tgt_poll_group_000", 00:23:47.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:47.379 "listen_address": { 00:23:47.379 "trtype": "RDMA", 00:23:47.379 "adrfam": "IPv4", 00:23:47.379 "traddr": "192.168.100.8", 00:23:47.379 "trsvcid": "4420" 00:23:47.379 }, 00:23:47.379 "peer_address": { 00:23:47.379 "trtype": "RDMA", 00:23:47.379 "adrfam": "IPv4", 00:23:47.379 "traddr": "192.168.100.8", 00:23:47.379 "trsvcid": "34927" 00:23:47.379 }, 00:23:47.379 "auth": { 00:23:47.379 "state": "completed", 00:23:47.379 "digest": "sha512", 00:23:47.379 "dhgroup": "ffdhe2048" 00:23:47.379 } 00:23:47.379 } 00:23:47.379 ]' 00:23:47.379 15:28:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:47.379 15:28:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:47.379 15:28:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:47.639 15:28:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:47.639 15:28:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:47.639 15:28:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:47.639 15:28:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:47.639 15:28:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:47.899 15:28:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 
00:23:47.899 15:28:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:23:48.475 15:28:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:48.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:48.475 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:48.475 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.475 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.475 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.475 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:48.475 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:48.475 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:48.736 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:23:48.736 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:48.736 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:48.736 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:48.736 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:48.736 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:48.736 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:48.736 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.736 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.736 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.736 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:48.736 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:48.736 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:48.995 00:23:48.995 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:48.995 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:48.995 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:49.254 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.254 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:49.254 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.255 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.255 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.255 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:49.255 { 00:23:49.255 "cntlid": 109, 00:23:49.255 "qid": 0, 00:23:49.255 "state": "enabled", 00:23:49.255 "thread": "nvmf_tgt_poll_group_000", 00:23:49.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:49.255 "listen_address": { 00:23:49.255 "trtype": "RDMA", 00:23:49.255 "adrfam": "IPv4", 00:23:49.255 "traddr": "192.168.100.8", 00:23:49.255 "trsvcid": "4420" 00:23:49.255 }, 00:23:49.255 "peer_address": { 00:23:49.255 "trtype": "RDMA", 00:23:49.255 "adrfam": "IPv4", 00:23:49.255 "traddr": "192.168.100.8", 00:23:49.255 "trsvcid": "52602" 00:23:49.255 }, 00:23:49.255 "auth": { 00:23:49.255 "state": "completed", 00:23:49.255 "digest": "sha512", 00:23:49.255 "dhgroup": "ffdhe2048" 00:23:49.255 } 00:23:49.255 } 00:23:49.255 ]' 00:23:49.255 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:49.255 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:49.255 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:49.255 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:49.255 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:49.514 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:49.514 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:49.514 15:28:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:49.514 15:28:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:23:49.514 15:28:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:23:50.453 15:28:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:50.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:50.453 15:28:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:50.453 15:28:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.453 15:28:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.453 15:28:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.453 15:28:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:50.453 15:28:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:50.453 15:28:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:50.714 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:23:50.714 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:50.714 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:50.714 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:50.714 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:50.714 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:50.714 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:23:50.714 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.714 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.714 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.714 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:50.714 15:28:18 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:50.714 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:50.974 00:23:50.975 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:50.975 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:50.975 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:50.975 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.975 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:50.975 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.975 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.234 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.234 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:51.234 { 00:23:51.234 "cntlid": 111, 00:23:51.234 "qid": 0, 00:23:51.234 "state": "enabled", 00:23:51.234 "thread": "nvmf_tgt_poll_group_000", 00:23:51.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:51.234 "listen_address": { 00:23:51.234 "trtype": "RDMA", 00:23:51.234 "adrfam": "IPv4", 00:23:51.235 "traddr": "192.168.100.8", 00:23:51.235 "trsvcid": "4420" 00:23:51.235 }, 00:23:51.235 "peer_address": { 00:23:51.235 "trtype": "RDMA", 00:23:51.235 "adrfam": "IPv4", 00:23:51.235 "traddr": "192.168.100.8", 00:23:51.235 "trsvcid": "55366" 00:23:51.235 }, 00:23:51.235 "auth": { 00:23:51.235 "state": "completed", 00:23:51.235 "digest": "sha512", 00:23:51.235 "dhgroup": "ffdhe2048" 00:23:51.235 } 00:23:51.235 } 00:23:51.235 ]' 00:23:51.235 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:51.235 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:51.235 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:51.235 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:51.235 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:51.235 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:51.235 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:51.235 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:51.494 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:23:51.494 15:28:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:23:52.062 15:28:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:52.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:52.063 15:28:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:52.063 15:28:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.063 15:28:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.063 15:28:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.063 15:28:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:52.063 15:28:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:52.063 15:28:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:52.063 15:28:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:52.321 15:28:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:23:52.321 15:28:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:52.321 15:28:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:52.321 15:28:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:52.321 15:28:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:52.321 15:28:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:52.321 15:28:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:52.321 15:28:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.321 15:28:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.321 15:28:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:23:52.321 15:28:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:52.322 15:28:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:52.322 15:28:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:52.580 00:23:52.580 15:28:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:52.580 15:28:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:52.580 15:28:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:52.840 15:28:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.840 15:28:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:52.840 15:28:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.840 15:28:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.840 15:28:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.840 15:28:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:52.840 { 00:23:52.840 "cntlid": 113, 00:23:52.840 "qid": 0, 00:23:52.840 "state": "enabled", 00:23:52.840 "thread": "nvmf_tgt_poll_group_000", 00:23:52.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:52.840 "listen_address": { 00:23:52.840 "trtype": "RDMA", 00:23:52.840 "adrfam": "IPv4", 00:23:52.840 "traddr": "192.168.100.8", 00:23:52.840 "trsvcid": "4420" 00:23:52.840 }, 00:23:52.840 "peer_address": { 00:23:52.840 "trtype": "RDMA", 00:23:52.840 "adrfam": "IPv4", 00:23:52.840 "traddr": "192.168.100.8", 00:23:52.840 "trsvcid": "44909" 00:23:52.840 }, 00:23:52.840 "auth": { 00:23:52.840 "state": "completed", 00:23:52.840 "digest": "sha512", 00:23:52.840 "dhgroup": "ffdhe3072" 00:23:52.840 } 00:23:52.840 } 00:23:52.840 ]' 00:23:52.840 15:28:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:52.840 15:28:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:52.840 15:28:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:53.099 15:28:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:53.099 15:28:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:53.099 15:28:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:53.099 15:28:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:53.099 15:28:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:53.358 15:28:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:23:53.358 15:28:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:23:53.927 15:28:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:53.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:53.927 15:28:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:53.927 15:28:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.927 15:28:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.927 15:28:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.927 15:28:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:53.927 15:28:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:53.927 15:28:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:54.186 15:28:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:23:54.186 15:28:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:54.186 15:28:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:54.186 15:28:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:54.186 15:28:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:54.186 15:28:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:54.186 15:28:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:23:54.186 15:28:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.187 15:28:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.187 15:28:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.187 15:28:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:54.187 15:28:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:54.187 15:28:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:54.446 00:23:54.446 15:28:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:54.446 15:28:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:54.446 15:28:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:54.705 15:28:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.705 15:28:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:54.705 15:28:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.705 15:28:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.705 15:28:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.705 15:28:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:54.705 { 00:23:54.705 "cntlid": 115, 00:23:54.705 "qid": 0, 00:23:54.705 "state": "enabled", 00:23:54.705 "thread": "nvmf_tgt_poll_group_000", 00:23:54.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:54.705 "listen_address": { 00:23:54.705 "trtype": "RDMA", 00:23:54.705 "adrfam": "IPv4", 00:23:54.705 "traddr": "192.168.100.8", 00:23:54.705 "trsvcid": "4420" 00:23:54.705 }, 00:23:54.705 "peer_address": { 00:23:54.705 "trtype": "RDMA", 00:23:54.705 "adrfam": "IPv4", 00:23:54.705 "traddr": "192.168.100.8", 00:23:54.705 "trsvcid": "59851" 00:23:54.705 }, 00:23:54.705 "auth": { 00:23:54.705 "state": "completed", 00:23:54.705 "digest": "sha512", 00:23:54.705 "dhgroup": "ffdhe3072" 00:23:54.705 } 00:23:54.705 } 00:23:54.705 ]' 00:23:54.705 15:28:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:54.705 15:28:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:54.705 15:28:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
00:23:54.705 15:28:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:54.705 15:28:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:54.705 15:28:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:54.705 15:28:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:54.706 15:28:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:54.965 15:28:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:23:54.965 15:28:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:23:55.533 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:55.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:55.792 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:55.792 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.792 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.792 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.792 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:55.792 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:55.792 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:56.052 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:23:56.052 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:56.052 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:56.052 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:56.052 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:56.052 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:56.052 
15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:56.052 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.052 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.052 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.052 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:56.052 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:56.052 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:56.311 00:23:56.311 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:56.311 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:56.312 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:56.571 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.571 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:56.571 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.571 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.571 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.571 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:56.571 { 00:23:56.571 "cntlid": 117, 00:23:56.571 "qid": 0, 00:23:56.571 "state": "enabled", 00:23:56.571 "thread": "nvmf_tgt_poll_group_000", 00:23:56.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:56.571 "listen_address": { 00:23:56.571 "trtype": "RDMA", 00:23:56.571 "adrfam": "IPv4", 00:23:56.571 "traddr": "192.168.100.8", 00:23:56.571 "trsvcid": "4420" 00:23:56.571 }, 00:23:56.571 "peer_address": { 00:23:56.571 "trtype": "RDMA", 00:23:56.571 "adrfam": "IPv4", 00:23:56.571 "traddr": "192.168.100.8", 00:23:56.571 "trsvcid": "43583" 00:23:56.571 }, 00:23:56.571 "auth": { 00:23:56.571 "state": "completed", 00:23:56.571 "digest": "sha512", 00:23:56.571 "dhgroup": "ffdhe3072" 00:23:56.571 } 00:23:56.571 } 00:23:56.571 ]' 00:23:56.571 15:28:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:23:56.571 15:28:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:56.571 15:28:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:56.571 15:28:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:56.571 15:28:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:56.571 15:28:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:56.571 15:28:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:56.571 15:28:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:56.830 15:28:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:23:56.830 15:28:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:23:57.399 15:28:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:57.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:57.399 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:57.399 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.399 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.399 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.399 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:57.399 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:57.399 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:57.659 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:23:57.659 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:57.659 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:57.659 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 
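The jq checks in the surrounding output verify, for each negotiated key, that the target reports the expected digest, DH group, and a completed DH-HMAC-CHAP state on the RDMA queue pair. A minimal bash sketch of that verification step, assuming the target-side RPC socket is the default one used by rpc_cmd in this run and reusing the subsystem NQN from the log:

  # Sketch only -- not part of the captured output.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")      # target-side RPC, default socket assumed
  [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]

The same three comparisons repeat in the log for every digest/dhgroup/key combination before the controller is detached.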
00:23:57.659 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:57.659 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:57.659 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:23:57.659 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.659 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.659 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.659 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:57.659 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:57.659 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:57.918 00:23:57.918 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:57.918 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:57.918 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:58.178 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.178 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:58.178 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.178 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.178 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.178 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:58.178 { 00:23:58.178 "cntlid": 119, 00:23:58.178 "qid": 0, 00:23:58.178 "state": "enabled", 00:23:58.178 "thread": "nvmf_tgt_poll_group_000", 00:23:58.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:23:58.178 "listen_address": { 00:23:58.178 "trtype": "RDMA", 00:23:58.178 "adrfam": "IPv4", 00:23:58.178 "traddr": "192.168.100.8", 00:23:58.178 "trsvcid": "4420" 00:23:58.178 }, 00:23:58.178 "peer_address": { 00:23:58.178 "trtype": "RDMA", 00:23:58.178 "adrfam": "IPv4", 00:23:58.178 "traddr": "192.168.100.8", 00:23:58.178 "trsvcid": "60801" 00:23:58.178 }, 00:23:58.178 "auth": { 00:23:58.178 "state": "completed", 00:23:58.178 "digest": "sha512", 00:23:58.178 "dhgroup": "ffdhe3072" 
00:23:58.178 } 00:23:58.178 } 00:23:58.178 ]' 00:23:58.178 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:58.178 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:58.178 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:58.438 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:58.438 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:58.438 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:58.438 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:58.438 15:28:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:58.697 15:28:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:23:58.697 15:28:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:23:59.264 15:28:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:59.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:59.264 15:28:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:59.264 15:28:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.264 15:28:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.264 15:28:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.264 15:28:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:59.264 15:28:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:59.264 15:28:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:59.264 15:28:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:59.523 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:23:59.523 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:59.523 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha512 00:23:59.523 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:59.523 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:59.523 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:59.523 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:59.523 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.523 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.523 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.523 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:59.523 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:59.523 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:59.782 00:23:59.782 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:59.782 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:59.782 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:00.041 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.041 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:00.041 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.041 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.041 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.041 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:00.041 { 00:24:00.041 "cntlid": 121, 00:24:00.041 "qid": 0, 00:24:00.041 "state": "enabled", 00:24:00.041 "thread": "nvmf_tgt_poll_group_000", 00:24:00.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:24:00.041 "listen_address": { 00:24:00.041 "trtype": "RDMA", 00:24:00.041 "adrfam": "IPv4", 00:24:00.041 "traddr": "192.168.100.8", 00:24:00.041 "trsvcid": "4420" 00:24:00.041 }, 00:24:00.041 "peer_address": { 00:24:00.041 "trtype": "RDMA", 
00:24:00.041 "adrfam": "IPv4", 00:24:00.041 "traddr": "192.168.100.8", 00:24:00.041 "trsvcid": "37760" 00:24:00.041 }, 00:24:00.041 "auth": { 00:24:00.041 "state": "completed", 00:24:00.041 "digest": "sha512", 00:24:00.041 "dhgroup": "ffdhe4096" 00:24:00.041 } 00:24:00.041 } 00:24:00.041 ]' 00:24:00.041 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:00.041 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:00.041 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:00.041 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:00.041 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:00.041 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:00.041 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:00.041 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:00.301 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:24:00.301 15:28:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:24:01.238 15:28:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:01.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:01.238 15:28:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:01.238 15:28:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.238 15:28:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.238 15:28:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.238 15:28:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:01.238 15:28:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:01.238 15:28:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
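Each iteration visible in this output follows the same host/target round trip: restrict the host's DH-HMAC-CHAP digests and DH groups, register the host NQN on the subsystem with the key under test, attach a host-side controller through the SPDK bdev layer, run the qpair checks, then repeat the handshake with the kernel initiator via nvme connect before tearing everything down. A condensed sketch of one iteration, paraphrased from the logged commands; key1/ckey1 are key names assumed to have been registered earlier in the run (not shown in this excerpt), and $key1_secret/$ckey1_secret stand in for the DHHC-1 strings printed in the log:

  # Sketch of one connect_authenticate cycle -- not the auth.sh source itself.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562

  "$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  "$rpc" -s "$hostsock" bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # ... qpair auth checks as in the sketch above ...
  "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
  nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 \
      --dhchap-secret "$key1_secret" --dhchap-ctrl-secret "$ckey1_secret"
  nvme disconnect -n "$subnqn"
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The bdev_nvme path exercises the SPDK userspace host (driven over /var/tmp/host.sock) and the nvme connect path exercises the kernel initiator against the same target credentials, which is why each key appears twice per DH group in this log.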
00:24:01.238 15:28:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:24:01.238 15:28:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:01.238 15:28:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:01.238 15:28:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:01.238 15:28:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:01.238 15:28:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:01.238 15:28:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:01.238 15:28:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.238 15:28:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.238 15:28:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.238 15:28:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:01.238 15:28:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:01.238 15:28:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:01.497 00:24:01.757 15:28:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:01.757 15:28:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:01.757 15:28:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:01.757 15:28:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.757 15:28:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:01.757 15:28:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.757 15:28:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.757 15:28:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.757 15:28:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:01.757 { 00:24:01.757 "cntlid": 123, 00:24:01.757 "qid": 0, 00:24:01.757 "state": "enabled", 00:24:01.757 "thread": "nvmf_tgt_poll_group_000", 
00:24:01.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:24:01.757 "listen_address": { 00:24:01.757 "trtype": "RDMA", 00:24:01.757 "adrfam": "IPv4", 00:24:01.757 "traddr": "192.168.100.8", 00:24:01.757 "trsvcid": "4420" 00:24:01.757 }, 00:24:01.757 "peer_address": { 00:24:01.757 "trtype": "RDMA", 00:24:01.757 "adrfam": "IPv4", 00:24:01.758 "traddr": "192.168.100.8", 00:24:01.758 "trsvcid": "47117" 00:24:01.758 }, 00:24:01.758 "auth": { 00:24:01.758 "state": "completed", 00:24:01.758 "digest": "sha512", 00:24:01.758 "dhgroup": "ffdhe4096" 00:24:01.758 } 00:24:01.758 } 00:24:01.758 ]' 00:24:01.758 15:28:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:02.017 15:28:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:02.017 15:28:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:02.017 15:28:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:02.017 15:28:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:02.017 15:28:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:02.017 15:28:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:02.017 15:28:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:02.276 15:28:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:24:02.276 15:28:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:24:02.848 15:28:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:02.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:02.848 15:28:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:02.848 15:28:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.848 15:28:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.108 15:28:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.108 15:28:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:03.108 15:28:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:24:03.108 15:28:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:03.108 15:28:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:24:03.108 15:28:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:03.108 15:28:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:03.108 15:28:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:03.108 15:28:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:03.108 15:28:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:03.108 15:28:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:03.108 15:28:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.108 15:28:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.108 15:28:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.108 15:28:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:03.108 15:28:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:03.108 15:28:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:03.368 00:24:03.627 15:28:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:03.627 15:28:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:03.627 15:28:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:03.627 15:28:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.627 15:28:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:03.627 15:28:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.627 15:28:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.627 15:28:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:24:03.627 15:28:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:03.627 { 00:24:03.627 "cntlid": 125, 00:24:03.627 "qid": 0, 00:24:03.627 "state": "enabled", 00:24:03.627 "thread": "nvmf_tgt_poll_group_000", 00:24:03.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:24:03.627 "listen_address": { 00:24:03.627 "trtype": "RDMA", 00:24:03.627 "adrfam": "IPv4", 00:24:03.627 "traddr": "192.168.100.8", 00:24:03.627 "trsvcid": "4420" 00:24:03.627 }, 00:24:03.627 "peer_address": { 00:24:03.627 "trtype": "RDMA", 00:24:03.627 "adrfam": "IPv4", 00:24:03.627 "traddr": "192.168.100.8", 00:24:03.627 "trsvcid": "39541" 00:24:03.627 }, 00:24:03.627 "auth": { 00:24:03.627 "state": "completed", 00:24:03.627 "digest": "sha512", 00:24:03.627 "dhgroup": "ffdhe4096" 00:24:03.627 } 00:24:03.627 } 00:24:03.627 ]' 00:24:03.627 15:28:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:03.886 15:28:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:03.886 15:28:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:03.886 15:28:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:03.886 15:28:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:03.887 15:28:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:03.887 15:28:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:03.887 15:28:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:04.146 15:28:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:24:04.146 15:28:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:24:04.740 15:28:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:04.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:04.740 15:28:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:04.740 15:28:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.740 15:28:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.740 15:28:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.740 15:28:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:04.740 15:28:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:04.740 15:28:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:05.053 15:28:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:24:05.053 15:28:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:05.053 15:28:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:05.053 15:28:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:05.053 15:28:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:05.053 15:28:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:05.053 15:28:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:24:05.053 15:28:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.053 15:28:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.053 15:28:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.053 15:28:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:05.053 15:28:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:05.053 15:28:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:05.398 00:24:05.398 15:28:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:05.398 15:28:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:05.398 15:28:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:05.658 15:28:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.658 15:28:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:05.658 15:28:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.658 15:28:33 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.658 15:28:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.658 15:28:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:05.658 { 00:24:05.658 "cntlid": 127, 00:24:05.658 "qid": 0, 00:24:05.658 "state": "enabled", 00:24:05.658 "thread": "nvmf_tgt_poll_group_000", 00:24:05.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:24:05.658 "listen_address": { 00:24:05.658 "trtype": "RDMA", 00:24:05.658 "adrfam": "IPv4", 00:24:05.658 "traddr": "192.168.100.8", 00:24:05.658 "trsvcid": "4420" 00:24:05.658 }, 00:24:05.658 "peer_address": { 00:24:05.658 "trtype": "RDMA", 00:24:05.658 "adrfam": "IPv4", 00:24:05.658 "traddr": "192.168.100.8", 00:24:05.658 "trsvcid": "49882" 00:24:05.658 }, 00:24:05.658 "auth": { 00:24:05.658 "state": "completed", 00:24:05.658 "digest": "sha512", 00:24:05.658 "dhgroup": "ffdhe4096" 00:24:05.658 } 00:24:05.658 } 00:24:05.658 ]' 00:24:05.658 15:28:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:05.658 15:28:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:05.658 15:28:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:05.658 15:28:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:05.658 15:28:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:05.658 15:28:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:05.658 15:28:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:05.658 15:28:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:05.917 15:28:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:24:05.917 15:28:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:24:06.484 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:06.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:06.743 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:06.743 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.743 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.743 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.743 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:06.743 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:06.743 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:06.743 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:06.743 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:24:06.743 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:06.743 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:06.743 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:24:06.743 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:06.743 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:06.743 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:06.743 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.743 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.003 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.003 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:07.003 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:07.003 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:07.262 00:24:07.262 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:07.262 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:07.262 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:07.521 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.521 15:28:34 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:07.521 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.521 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.521 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.521 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:07.521 { 00:24:07.521 "cntlid": 129, 00:24:07.521 "qid": 0, 00:24:07.521 "state": "enabled", 00:24:07.521 "thread": "nvmf_tgt_poll_group_000", 00:24:07.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:24:07.521 "listen_address": { 00:24:07.521 "trtype": "RDMA", 00:24:07.521 "adrfam": "IPv4", 00:24:07.521 "traddr": "192.168.100.8", 00:24:07.521 "trsvcid": "4420" 00:24:07.521 }, 00:24:07.521 "peer_address": { 00:24:07.521 "trtype": "RDMA", 00:24:07.521 "adrfam": "IPv4", 00:24:07.521 "traddr": "192.168.100.8", 00:24:07.521 "trsvcid": "53521" 00:24:07.521 }, 00:24:07.521 "auth": { 00:24:07.521 "state": "completed", 00:24:07.521 "digest": "sha512", 00:24:07.521 "dhgroup": "ffdhe6144" 00:24:07.521 } 00:24:07.521 } 00:24:07.521 ]' 00:24:07.521 15:28:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:07.521 15:28:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:07.521 15:28:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:07.521 15:28:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:07.521 15:28:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:07.521 15:28:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:07.521 15:28:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:07.521 15:28:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:07.780 15:28:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:24:07.780 15:28:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:24:08.349 15:28:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:08.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:08.608 15:28:36 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:08.608 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.608 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.608 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.608 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:08.608 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:08.608 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:08.867 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:24:08.867 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:08.867 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:08.867 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:24:08.867 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:08.867 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:08.867 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:08.867 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.867 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.867 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.867 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:08.867 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:08.867 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:09.126 00:24:09.126 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:09.126 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:09.126 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:09.385 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.385 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:09.385 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.385 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.385 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.385 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:09.385 { 00:24:09.385 "cntlid": 131, 00:24:09.385 "qid": 0, 00:24:09.385 "state": "enabled", 00:24:09.385 "thread": "nvmf_tgt_poll_group_000", 00:24:09.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:24:09.385 "listen_address": { 00:24:09.385 "trtype": "RDMA", 00:24:09.385 "adrfam": "IPv4", 00:24:09.385 "traddr": "192.168.100.8", 00:24:09.385 "trsvcid": "4420" 00:24:09.385 }, 00:24:09.385 "peer_address": { 00:24:09.385 "trtype": "RDMA", 00:24:09.385 "adrfam": "IPv4", 00:24:09.385 "traddr": "192.168.100.8", 00:24:09.385 "trsvcid": "55762" 00:24:09.385 }, 00:24:09.385 "auth": { 00:24:09.385 "state": "completed", 00:24:09.385 "digest": "sha512", 00:24:09.385 "dhgroup": "ffdhe6144" 00:24:09.385 } 00:24:09.385 } 00:24:09.385 ]' 00:24:09.386 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:09.386 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:09.386 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:09.386 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:09.386 15:28:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:09.386 15:28:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:09.386 15:28:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:09.386 15:28:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:09.645 15:28:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:24:09.645 15:28:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret 
DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:24:10.583 15:28:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:10.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:10.583 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:10.583 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.583 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.583 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.583 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:10.583 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:10.583 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:10.843 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:24:10.843 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:10.843 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:10.843 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:24:10.843 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:10.843 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:10.843 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:10.843 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.843 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.843 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.843 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:10.843 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:10.843 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:11.102 00:24:11.102 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:11.102 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:11.102 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:11.361 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.361 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:11.361 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.361 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.361 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.362 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:11.362 { 00:24:11.362 "cntlid": 133, 00:24:11.362 "qid": 0, 00:24:11.362 "state": "enabled", 00:24:11.362 "thread": "nvmf_tgt_poll_group_000", 00:24:11.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:24:11.362 "listen_address": { 00:24:11.362 "trtype": "RDMA", 00:24:11.362 "adrfam": "IPv4", 00:24:11.362 "traddr": "192.168.100.8", 00:24:11.362 "trsvcid": "4420" 00:24:11.362 }, 00:24:11.362 "peer_address": { 00:24:11.362 "trtype": "RDMA", 00:24:11.362 "adrfam": "IPv4", 00:24:11.362 "traddr": "192.168.100.8", 00:24:11.362 "trsvcid": "44462" 00:24:11.362 }, 00:24:11.362 "auth": { 00:24:11.362 "state": "completed", 00:24:11.362 "digest": "sha512", 00:24:11.362 "dhgroup": "ffdhe6144" 00:24:11.362 } 00:24:11.362 } 00:24:11.362 ]' 00:24:11.362 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:11.362 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:11.362 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:11.362 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:11.362 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:11.362 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:11.362 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:11.362 15:28:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:11.621 15:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:24:11.621 15:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:24:12.192 15:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:12.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:12.452 15:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:12.452 15:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.452 15:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.452 15:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.452 15:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:12.452 15:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:12.452 15:28:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:12.712 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:24:12.712 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:12.712 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:12.712 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:24:12.712 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:12.712 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:12.712 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:24:12.712 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.712 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.712 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.712 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:12.712 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:12.712 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:12.971 00:24:12.971 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:12.971 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:12.971 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:13.230 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.230 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:13.230 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.230 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.230 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.230 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:13.230 { 00:24:13.230 "cntlid": 135, 00:24:13.230 "qid": 0, 00:24:13.230 "state": "enabled", 00:24:13.230 "thread": "nvmf_tgt_poll_group_000", 00:24:13.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:24:13.230 "listen_address": { 00:24:13.230 "trtype": "RDMA", 00:24:13.230 "adrfam": "IPv4", 00:24:13.230 "traddr": "192.168.100.8", 00:24:13.230 "trsvcid": "4420" 00:24:13.230 }, 00:24:13.230 "peer_address": { 00:24:13.230 "trtype": "RDMA", 00:24:13.230 "adrfam": "IPv4", 00:24:13.230 "traddr": "192.168.100.8", 00:24:13.230 "trsvcid": "50754" 00:24:13.230 }, 00:24:13.230 "auth": { 00:24:13.230 "state": "completed", 00:24:13.230 "digest": "sha512", 00:24:13.230 "dhgroup": "ffdhe6144" 00:24:13.230 } 00:24:13.230 } 00:24:13.230 ]' 00:24:13.230 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:13.230 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:13.230 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:13.230 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:13.230 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:13.230 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:13.230 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:13.230 15:28:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:13.489 15:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 
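The trace around this point is one pass of the connect_authenticate loop: sha512 with ffdhe6144 and key3. A minimal host-side condensation of what such a pass runs, assembled only from commands visible in this log, is sketched below; rpc_cmd is the test harness helper seen in the trace that talks to the target app, while the rpc.py calls with -s /var/tmp/host.sock go to the host app (what the trace wraps as hostrpc). The DHHC-1 secret passed to nvme connect is the one printed in the trace and is elided here.

    # host side: restrict the initiator to the digest/dhgroup pair under test
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

    # target side: register the host NQN on cnode0 with the key under test
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3

    # attach a controller through the host app, then check the negotiated auth state on the target
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # the test expects "completed"

    # tear down the RPC-attached controller, then repeat the check with nvme-cli and clean up
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 \
        --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:...
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562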
00:24:13.489 15:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:24:14.426 15:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:14.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:14.426 15:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:14.426 15:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.426 15:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.426 15:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.426 15:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:14.426 15:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:14.426 15:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:14.427 15:28:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:14.427 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:24:14.427 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:14.427 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:14.427 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:14.427 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:14.427 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:14.427 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:14.427 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.427 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.686 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.686 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:14.686 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:14.686 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:14.945 00:24:14.945 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:14.945 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:14.946 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:15.205 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.205 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:15.205 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.205 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.205 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.205 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:15.205 { 00:24:15.205 "cntlid": 137, 00:24:15.205 "qid": 0, 00:24:15.205 "state": "enabled", 00:24:15.205 "thread": "nvmf_tgt_poll_group_000", 00:24:15.205 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:24:15.205 "listen_address": { 00:24:15.205 "trtype": "RDMA", 00:24:15.205 "adrfam": "IPv4", 00:24:15.205 "traddr": "192.168.100.8", 00:24:15.205 "trsvcid": "4420" 00:24:15.205 }, 00:24:15.205 "peer_address": { 00:24:15.205 "trtype": "RDMA", 00:24:15.205 "adrfam": "IPv4", 00:24:15.205 "traddr": "192.168.100.8", 00:24:15.205 "trsvcid": "35595" 00:24:15.205 }, 00:24:15.205 "auth": { 00:24:15.205 "state": "completed", 00:24:15.205 "digest": "sha512", 00:24:15.205 "dhgroup": "ffdhe8192" 00:24:15.205 } 00:24:15.205 } 00:24:15.205 ]' 00:24:15.205 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:15.464 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:15.464 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:15.464 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:15.464 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:15.464 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:15.464 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:15.464 15:28:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:15.724 15:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:24:15.724 15:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:24:16.292 15:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:16.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:16.292 15:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:16.292 15:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.292 15:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.292 15:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.292 15:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:16.292 15:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:16.292 15:28:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:16.552 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:24:16.552 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:16.552 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:16.552 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:16.552 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:16.552 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:16.552 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:16.552 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.552 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.552 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:24:16.552 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:16.552 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:16.552 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:17.120 00:24:17.120 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:17.120 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:17.120 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:17.379 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.379 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:17.379 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.379 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.379 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.379 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:17.379 { 00:24:17.379 "cntlid": 139, 00:24:17.379 "qid": 0, 00:24:17.379 "state": "enabled", 00:24:17.379 "thread": "nvmf_tgt_poll_group_000", 00:24:17.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:24:17.379 "listen_address": { 00:24:17.379 "trtype": "RDMA", 00:24:17.379 "adrfam": "IPv4", 00:24:17.379 "traddr": "192.168.100.8", 00:24:17.379 "trsvcid": "4420" 00:24:17.379 }, 00:24:17.379 "peer_address": { 00:24:17.379 "trtype": "RDMA", 00:24:17.379 "adrfam": "IPv4", 00:24:17.379 "traddr": "192.168.100.8", 00:24:17.379 "trsvcid": "36241" 00:24:17.379 }, 00:24:17.379 "auth": { 00:24:17.379 "state": "completed", 00:24:17.379 "digest": "sha512", 00:24:17.379 "dhgroup": "ffdhe8192" 00:24:17.379 } 00:24:17.379 } 00:24:17.379 ]' 00:24:17.379 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:17.379 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:17.379 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:17.379 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:17.379 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:17.379 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:17.379 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:17.379 15:28:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:17.637 15:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:24:17.638 15:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: --dhchap-ctrl-secret DHHC-1:02:OTg0NGRlZjlhNGZjMGVjMDIzNzFlZjc1OGY4YThlM2ZhMzY4NjkxMWJiNWU3YmNlvKZ7FQ==: 00:24:18.205 15:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:18.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:18.464 15:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:18.464 15:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.464 15:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.464 15:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.464 15:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:18.464 15:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:18.464 15:28:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:18.723 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:24:18.723 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:18.723 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:18.723 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:18.723 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:18.723 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:18.723 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:18.723 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.723 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.723 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.723 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:18.723 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:18.723 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:19.290 00:24:19.290 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:19.290 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:19.290 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:19.290 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.290 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:19.290 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.290 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.290 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.290 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:19.290 { 00:24:19.290 "cntlid": 141, 00:24:19.290 "qid": 0, 00:24:19.290 "state": "enabled", 00:24:19.290 "thread": "nvmf_tgt_poll_group_000", 00:24:19.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:24:19.290 "listen_address": { 00:24:19.290 "trtype": "RDMA", 00:24:19.290 "adrfam": "IPv4", 00:24:19.290 "traddr": "192.168.100.8", 00:24:19.290 "trsvcid": "4420" 00:24:19.290 }, 00:24:19.290 "peer_address": { 00:24:19.290 "trtype": "RDMA", 00:24:19.290 "adrfam": "IPv4", 00:24:19.290 "traddr": "192.168.100.8", 00:24:19.290 "trsvcid": "38289" 00:24:19.290 }, 00:24:19.290 "auth": { 00:24:19.290 "state": "completed", 00:24:19.290 "digest": "sha512", 00:24:19.290 "dhgroup": "ffdhe8192" 00:24:19.290 } 00:24:19.290 } 00:24:19.290 ]' 00:24:19.290 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:19.548 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:19.548 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:19.548 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:19.548 15:28:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:19.548 15:28:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:19.548 15:28:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:19.548 15:28:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:19.807 15:28:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:24:19.807 15:28:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:01:NzQxN2U0YTcxNzZmNzA2NzIzNmViZjUzYzhjZWNiZDmenrbf: 00:24:20.375 15:28:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:20.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:20.634 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:20.634 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.634 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.634 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.634 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:20.634 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:20.634 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:20.634 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:24:20.634 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:20.634 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:20.634 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:20.634 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:20.634 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:20.634 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:24:20.634 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.634 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.634 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.634 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:20.634 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:20.634 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:21.203 00:24:21.203 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:21.203 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:21.203 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:21.462 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.462 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:21.462 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.462 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:21.462 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.462 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:21.462 { 00:24:21.462 "cntlid": 143, 00:24:21.462 "qid": 0, 00:24:21.462 "state": "enabled", 00:24:21.462 "thread": "nvmf_tgt_poll_group_000", 00:24:21.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:24:21.462 "listen_address": { 00:24:21.462 "trtype": "RDMA", 00:24:21.462 "adrfam": "IPv4", 00:24:21.462 "traddr": "192.168.100.8", 00:24:21.462 "trsvcid": "4420" 00:24:21.462 }, 00:24:21.462 "peer_address": { 00:24:21.462 "trtype": "RDMA", 00:24:21.462 "adrfam": "IPv4", 00:24:21.462 "traddr": "192.168.100.8", 00:24:21.462 "trsvcid": "35870" 00:24:21.462 }, 00:24:21.462 "auth": { 00:24:21.462 "state": "completed", 00:24:21.462 "digest": "sha512", 00:24:21.462 "dhgroup": "ffdhe8192" 00:24:21.462 } 00:24:21.462 } 00:24:21.462 ]' 00:24:21.462 15:28:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:21.462 15:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:21.462 15:28:49 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:21.462 15:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:21.462 15:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:21.722 15:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:21.722 15:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:21.722 15:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:21.722 15:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:24:21.722 15:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:24:22.661 15:28:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:22.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:22.661 15:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:22.661 15:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.661 15:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.661 15:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.661 15:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:24:22.661 15:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:24:22.661 15:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:24:22.661 15:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:22.661 15:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:22.661 15:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:22.661 15:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:24:22.661 15:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:22.920 15:28:50 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:22.920 15:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:22.920 15:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:22.920 15:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:22.920 15:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:22.920 15:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.920 15:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.920 15:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.920 15:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:22.920 15:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:22.920 15:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:23.179 00:24:23.438 15:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:23.438 15:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:23.438 15:28:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:23.438 15:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.438 15:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:23.438 15:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.438 15:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.438 15:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.438 15:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:23.438 { 00:24:23.438 "cntlid": 145, 00:24:23.438 "qid": 0, 00:24:23.438 "state": "enabled", 00:24:23.438 "thread": "nvmf_tgt_poll_group_000", 00:24:23.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:24:23.438 "listen_address": { 00:24:23.438 "trtype": "RDMA", 00:24:23.438 "adrfam": "IPv4", 00:24:23.438 "traddr": "192.168.100.8", 00:24:23.438 "trsvcid": "4420" 00:24:23.438 }, 00:24:23.438 
"peer_address": { 00:24:23.438 "trtype": "RDMA", 00:24:23.438 "adrfam": "IPv4", 00:24:23.438 "traddr": "192.168.100.8", 00:24:23.438 "trsvcid": "39986" 00:24:23.438 }, 00:24:23.438 "auth": { 00:24:23.438 "state": "completed", 00:24:23.438 "digest": "sha512", 00:24:23.438 "dhgroup": "ffdhe8192" 00:24:23.438 } 00:24:23.438 } 00:24:23.438 ]' 00:24:23.438 15:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:23.698 15:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:23.698 15:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:23.698 15:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:23.698 15:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:23.698 15:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:23.698 15:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:23.698 15:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:23.957 15:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:24:23.957 15:28:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTY1NGVjYmRiN2EwZTY3ZDk5NTIxNWE1NTVmYTRlZjEyYmI1ZmJmYmY4MThkNzUx7YvHCQ==: --dhchap-ctrl-secret DHHC-1:03:NzViYzM3ZDIxM2Y3NTQ0Y2ZkYzYwMzNlYmYyOWQyMDZkN2FjYzVkNzg0YjZhMDE2ODI5NGM4OTliNjhkN2MxMk0UkXU=: 00:24:24.524 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:24.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:24.524 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:24.524 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.524 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.524 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.524 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 00:24:24.524 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.524 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.524 15:28:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.524 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:24:24.524 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:24.524 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:24:24.524 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:24.782 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:24.782 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:24.782 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:24.782 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:24:24.782 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:24:24.782 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:24:25.041 request: 00:24:25.041 { 00:24:25.041 "name": "nvme0", 00:24:25.041 "trtype": "rdma", 00:24:25.041 "traddr": "192.168.100.8", 00:24:25.041 "adrfam": "ipv4", 00:24:25.041 "trsvcid": "4420", 00:24:25.041 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:25.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:24:25.041 "prchk_reftag": false, 00:24:25.041 "prchk_guard": false, 00:24:25.041 "hdgst": false, 00:24:25.041 "ddgst": false, 00:24:25.041 "dhchap_key": "key2", 00:24:25.041 "allow_unrecognized_csi": false, 00:24:25.041 "method": "bdev_nvme_attach_controller", 00:24:25.041 "req_id": 1 00:24:25.041 } 00:24:25.041 Got JSON-RPC error response 00:24:25.041 response: 00:24:25.041 { 00:24:25.041 "code": -5, 00:24:25.041 "message": "Input/output error" 00:24:25.041 } 00:24:25.300 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:25.300 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:25.300 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:25.300 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:25.300 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:25.300 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.300 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:24:25.300 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.301 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:25.301 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.301 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.301 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.301 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:25.301 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:25.301 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:25.301 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:25.301 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:25.301 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:25.301 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:25.301 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:25.301 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:25.301 15:28:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:25.571 request: 00:24:25.571 { 00:24:25.571 "name": "nvme0", 00:24:25.571 "trtype": "rdma", 00:24:25.571 "traddr": "192.168.100.8", 00:24:25.571 "adrfam": "ipv4", 00:24:25.571 "trsvcid": "4420", 00:24:25.571 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:25.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:24:25.571 "prchk_reftag": false, 00:24:25.571 "prchk_guard": false, 00:24:25.571 "hdgst": false, 00:24:25.571 "ddgst": false, 00:24:25.571 "dhchap_key": "key1", 00:24:25.571 "dhchap_ctrlr_key": "ckey2", 00:24:25.571 "allow_unrecognized_csi": false, 00:24:25.571 "method": "bdev_nvme_attach_controller", 00:24:25.571 "req_id": 1 00:24:25.571 } 00:24:25.571 Got JSON-RPC error response 00:24:25.571 response: 00:24:25.571 { 00:24:25.571 "code": -5, 00:24:25.571 "message": "Input/output error" 00:24:25.571 } 00:24:25.830 15:28:53 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:25.830 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:25.830 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:25.830 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:25.830 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:25.830 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.830 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.830 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.830 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 00:24:25.830 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.830 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.830 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.830 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:25.830 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:25.830 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:25.830 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:25.830 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:25.830 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:25.830 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:25.830 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:25.830 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:25.830 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:26.089 request: 00:24:26.089 { 00:24:26.089 "name": "nvme0", 
00:24:26.089 "trtype": "rdma", 00:24:26.089 "traddr": "192.168.100.8", 00:24:26.089 "adrfam": "ipv4", 00:24:26.089 "trsvcid": "4420", 00:24:26.089 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:26.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:24:26.090 "prchk_reftag": false, 00:24:26.090 "prchk_guard": false, 00:24:26.090 "hdgst": false, 00:24:26.090 "ddgst": false, 00:24:26.090 "dhchap_key": "key1", 00:24:26.090 "dhchap_ctrlr_key": "ckey1", 00:24:26.090 "allow_unrecognized_csi": false, 00:24:26.090 "method": "bdev_nvme_attach_controller", 00:24:26.090 "req_id": 1 00:24:26.090 } 00:24:26.090 Got JSON-RPC error response 00:24:26.090 response: 00:24:26.090 { 00:24:26.090 "code": -5, 00:24:26.090 "message": "Input/output error" 00:24:26.090 } 00:24:26.349 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:26.349 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:26.349 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:26.349 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:26.349 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:26.349 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.349 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:26.349 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.349 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3131304 00:24:26.349 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3131304 ']' 00:24:26.349 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3131304 00:24:26.349 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:24:26.349 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:26.349 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3131304 00:24:26.349 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:26.349 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:26.349 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3131304' 00:24:26.349 killing process with pid 3131304 00:24:26.349 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3131304 00:24:26.349 15:28:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3131304 00:24:27.728 15:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:24:27.728 15:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:27.728 15:28:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:27.728 15:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.728 15:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3151437 00:24:27.728 15:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:24:27.728 15:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3151437 00:24:27.728 15:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3151437 ']' 00:24:27.728 15:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.728 15:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:27.728 15:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.728 15:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:27.728 15:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.670 15:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:28.670 15:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:24:28.670 15:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:28.670 15:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:28.670 15:28:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.670 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.670 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:24:28.670 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3151437 00:24:28.670 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3151437 ']' 00:24:28.670 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.670 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:28.670 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
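The target has just been restarted with --wait-for-rpc, so the DH-HMAC-CHAP key material is registered again through the keyring RPCs before any host entry is added; the keyring_file_add_key calls traced below do this for key0..key3 and, where present, the matching controller keys. A minimal sketch of that flow, assuming the same rpc.py script and the generated key files (the loop and the /tmp/spdk.key-$i paths are illustrative stand-ins, not the exact file names from this run):

  # Load each generated host key (and controller key, when one exists) into the target keyring.
  for i in 0 1 2 3; do
    ./scripts/rpc.py keyring_file_add_key "key$i"  "/tmp/spdk.key-$i"    # hypothetical path
    ./scripts/rpc.py keyring_file_add_key "ckey$i" "/tmp/spdk.key-c$i"   # controller key, optional
  done
  # Allow the host NQN to authenticate against the subsystem with one of the registered keys.
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3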
00:24:28.670 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:28.670 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.670 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:28.670 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:24:28.670 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:24:28.670 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.670 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.930 null0 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1r2 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.cHw ]] 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.cHw 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.tkB 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.z6T ]] 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.z6T 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 
00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ny6 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.aXq ]] 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.aXq 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.PwT 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:29.189 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:29.190 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:24:29.190 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.190 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.190 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.190 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:29.190 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:29.190 15:28:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:30.128 nvme0n1 00:24:30.128 15:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:30.128 15:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:30.128 15:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:30.128 15:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.128 15:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:30.128 15:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.128 15:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:30.128 15:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.128 15:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:30.128 { 00:24:30.128 "cntlid": 1, 00:24:30.128 "qid": 0, 00:24:30.128 "state": "enabled", 00:24:30.128 "thread": "nvmf_tgt_poll_group_000", 00:24:30.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:24:30.128 "listen_address": { 00:24:30.128 "trtype": "RDMA", 00:24:30.128 "adrfam": "IPv4", 00:24:30.128 "traddr": "192.168.100.8", 00:24:30.128 "trsvcid": "4420" 00:24:30.128 }, 00:24:30.128 "peer_address": { 00:24:30.128 "trtype": "RDMA", 00:24:30.128 "adrfam": "IPv4", 00:24:30.128 "traddr": "192.168.100.8", 00:24:30.128 "trsvcid": "46713" 00:24:30.128 }, 00:24:30.128 "auth": { 00:24:30.128 "state": "completed", 00:24:30.128 "digest": "sha512", 00:24:30.128 "dhgroup": "ffdhe8192" 00:24:30.128 } 00:24:30.128 } 00:24:30.128 ]' 00:24:30.128 15:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:30.387 15:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:30.387 15:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:30.387 15:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:30.387 15:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:30.387 15:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:30.387 15:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:30.387 15:28:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:30.646 15:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:24:30.646 15:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:24:31.214 15:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:31.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:31.472 15:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:31.472 15:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.472 15:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.472 15:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.472 15:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:24:31.472 15:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.472 15:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.472 15:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.472 15:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:24:31.472 15:28:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:24:31.732 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:24:31.733 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:31.733 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:24:31.733 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:31.733 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:31.733 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:31.733 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:31.733 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:31.733 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:31.733 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:31.733 request: 00:24:31.733 { 00:24:31.733 "name": "nvme0", 00:24:31.733 "trtype": "rdma", 00:24:31.733 "traddr": "192.168.100.8", 00:24:31.733 "adrfam": "ipv4", 00:24:31.733 "trsvcid": "4420", 00:24:31.733 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:31.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:24:31.733 "prchk_reftag": false, 00:24:31.733 "prchk_guard": false, 00:24:31.733 "hdgst": false, 00:24:31.733 "ddgst": false, 00:24:31.733 "dhchap_key": "key3", 00:24:31.733 "allow_unrecognized_csi": false, 00:24:31.733 "method": "bdev_nvme_attach_controller", 00:24:31.733 "req_id": 1 00:24:31.733 } 00:24:31.733 Got JSON-RPC error response 00:24:31.733 response: 00:24:31.733 { 00:24:31.733 "code": -5, 00:24:31.733 "message": "Input/output error" 00:24:31.733 } 00:24:31.992 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:31.992 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:31.992 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:31.992 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:31.992 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:24:31.992 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:24:31.992 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:24:31.992 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:24:31.992 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:24:31.992 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:31.992 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:24:31.992 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:31.992 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:31.992 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:31.992 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:31.992 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 
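The NOT wrapper on bdev_connect above turns this into a negative check: with the host's allowed digests narrowed to sha256 by the preceding bdev_nvme_set_options call, the attach with key3 is expected to be refused, and the JSON-RPC response that follows reports code -5 (Input/output error) accordingly. A rough equivalent of the host-side pattern, assuming the same host RPC socket used throughout this run ($hostnqn stands in for the full host NQN):

  # Narrow the host's DH-HMAC-CHAP digests, then confirm the attach is rejected.
  ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
  if ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3; then
      echo "attach unexpectedly succeeded" >&2; exit 1
  fi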
00:24:31.992 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:31.993 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:32.251 request: 00:24:32.251 { 00:24:32.251 "name": "nvme0", 00:24:32.251 "trtype": "rdma", 00:24:32.251 "traddr": "192.168.100.8", 00:24:32.251 "adrfam": "ipv4", 00:24:32.251 "trsvcid": "4420", 00:24:32.251 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:32.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:24:32.251 "prchk_reftag": false, 00:24:32.251 "prchk_guard": false, 00:24:32.251 "hdgst": false, 00:24:32.251 "ddgst": false, 00:24:32.251 "dhchap_key": "key3", 00:24:32.252 "allow_unrecognized_csi": false, 00:24:32.252 "method": "bdev_nvme_attach_controller", 00:24:32.252 "req_id": 1 00:24:32.252 } 00:24:32.252 Got JSON-RPC error response 00:24:32.252 response: 00:24:32.252 { 00:24:32.252 "code": -5, 00:24:32.252 "message": "Input/output error" 00:24:32.252 } 00:24:32.252 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:32.252 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:32.252 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:32.252 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:32.252 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:24:32.252 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:24:32.252 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:24:32.252 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:32.252 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:32.252 15:28:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:32.511 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:32.511 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.511 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:32.511 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:24:32.511 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:32.511 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.511 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:32.511 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.511 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:32.511 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:32.511 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:32.511 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:32.511 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:32.511 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:32.511 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:32.511 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:32.511 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:32.511 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:33.080 request: 00:24:33.080 { 00:24:33.080 "name": "nvme0", 00:24:33.080 "trtype": "rdma", 00:24:33.080 "traddr": "192.168.100.8", 00:24:33.080 "adrfam": "ipv4", 00:24:33.080 "trsvcid": "4420", 00:24:33.080 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:33.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:24:33.080 "prchk_reftag": false, 00:24:33.080 "prchk_guard": false, 00:24:33.080 "hdgst": false, 00:24:33.080 "ddgst": false, 00:24:33.080 "dhchap_key": "key0", 00:24:33.080 "dhchap_ctrlr_key": "key1", 00:24:33.080 "allow_unrecognized_csi": false, 00:24:33.080 "method": "bdev_nvme_attach_controller", 00:24:33.080 "req_id": 1 00:24:33.080 } 00:24:33.080 Got JSON-RPC error response 00:24:33.080 response: 00:24:33.080 { 00:24:33.080 "code": -5, 00:24:33.080 "message": "Input/output error" 00:24:33.080 } 00:24:33.080 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:33.080 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:33.080 
15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:33.080 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:33.080 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:24:33.080 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:24:33.080 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:24:33.340 nvme0n1 00:24:33.340 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:24:33.340 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:24:33.340 15:29:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:33.599 15:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.599 15:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:33.599 15:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:33.858 15:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 00:24:33.858 15:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.858 15:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.858 15:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.858 15:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:24:33.858 15:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:33.858 15:29:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:34.427 nvme0n1 00:24:34.427 15:29:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:24:34.427 15:29:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:24:34.427 15:29:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:34.687 15:29:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.687 15:29:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:34.687 15:29:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.687 15:29:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.687 15:29:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.687 15:29:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:24:34.687 15:29:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:24:34.687 15:29:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:35.007 15:29:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.007 15:29:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:24:35.007 15:29:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: --dhchap-ctrl-secret DHHC-1:03:ZDVkZDU4NTQ2MGE5MDc1ZTA5MTUyOGFjZTkzNTlhZmJkODM5OGE2OTFkZjM2MmI1NTY1ZjA4OGEzMmFkMDMzNNtvy7o=: 00:24:35.576 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:24:35.576 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:24:35.576 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:24:35.576 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:24:35.576 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:24:35.576 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:24:35.576 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:24:35.576 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:35.576 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:35.835 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:24:35.835 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:35.835 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:24:35.835 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:35.835 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:35.835 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:35.835 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:35.835 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:24:35.835 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:35.835 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:36.405 request: 00:24:36.405 { 00:24:36.405 "name": "nvme0", 00:24:36.405 "trtype": "rdma", 00:24:36.405 "traddr": "192.168.100.8", 00:24:36.405 "adrfam": "ipv4", 00:24:36.405 "trsvcid": "4420", 00:24:36.405 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:36.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:24:36.405 "prchk_reftag": false, 00:24:36.405 "prchk_guard": false, 00:24:36.405 "hdgst": false, 00:24:36.405 "ddgst": false, 00:24:36.405 "dhchap_key": "key1", 00:24:36.405 "allow_unrecognized_csi": false, 00:24:36.405 "method": "bdev_nvme_attach_controller", 00:24:36.405 "req_id": 1 00:24:36.405 } 00:24:36.405 Got JSON-RPC error response 00:24:36.405 response: 00:24:36.405 { 00:24:36.405 "code": -5, 00:24:36.405 "message": "Input/output error" 00:24:36.405 } 00:24:36.405 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:36.405 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:36.405 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:36.405 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:36.405 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:36.405 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:36.405 15:29:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:36.974 nvme0n1 00:24:37.234 15:29:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:24:37.234 15:29:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:24:37.234 15:29:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:37.234 15:29:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.234 15:29:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:37.234 15:29:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:37.493 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:37.493 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.493 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.493 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.493 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:24:37.493 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:24:37.493 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:24:37.752 nvme0n1 00:24:37.752 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:24:37.753 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:37.753 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:24:38.011 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.011 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:38.011 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:38.271 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:38.271 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.271 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.271 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.271 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: '' 2s 00:24:38.271 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:24:38.271 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:24:38.271 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: 00:24:38.271 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:24:38.271 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:24:38.271 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:24:38.271 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: ]] 00:24:38.271 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YzkxNzE0MGEyZmUyN2ExMzc0MTNiNWNlNTYzYzMwNjmbHkbL: 00:24:38.271 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:24:38.271 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:24:38.271 15:29:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:24:40.184 15:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:24:40.184 15:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:24:40.184 15:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:24:40.184 15:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:24:40.444 15:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:24:40.444 15:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:24:40.444 15:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:24:40.444 15:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:24:40.444 15:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.444 15:29:07 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:40.444 15:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.444 15:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: 2s 00:24:40.444 15:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:24:40.444 15:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:24:40.444 15:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:24:40.444 15:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: 00:24:40.444 15:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:24:40.444 15:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:24:40.444 15:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:24:40.444 15:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: ]] 00:24:40.444 15:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NmJiMzYyODUyM2YzZDcxMmMxMjdmZGM4OGUzODA3OTUyMGY0NTdkNzE0NjAyYzhims64xg==: 00:24:40.444 15:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:24:40.444 15:29:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:24:42.351 15:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:24:42.351 15:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:24:42.351 15:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:24:42.351 15:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:24:42.351 15:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:24:42.351 15:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:24:42.351 15:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:24:42.351 15:29:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:42.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:42.616 15:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:42.616 15:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.616 15:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.616 15:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.616 15:29:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:42.616 15:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:42.616 15:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:43.554 nvme0n1 00:24:43.554 15:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:43.554 15:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.554 15:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:43.554 15:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.554 15:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:43.554 15:29:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:43.813 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:24:43.813 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:24:43.813 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:44.072 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.072 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:44.072 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.072 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:44.072 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.072 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:24:44.072 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:24:44.332 15:29:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:24:44.332 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:24:44.332 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:44.332 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.332 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:44.332 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.332 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:44.332 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.332 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:44.332 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:44.332 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:44.332 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:24:44.332 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:44.332 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:24:44.592 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:44.592 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:44.592 15:29:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:44.851 request: 00:24:44.851 { 00:24:44.851 "name": "nvme0", 00:24:44.851 "dhchap_key": "key1", 00:24:44.851 "dhchap_ctrlr_key": "key3", 00:24:44.851 "method": "bdev_nvme_set_keys", 00:24:44.851 "req_id": 1 00:24:44.851 } 00:24:44.851 Got JSON-RPC error response 00:24:44.851 response: 00:24:44.851 { 00:24:44.851 "code": -13, 00:24:44.851 "message": "Permission denied" 00:24:44.851 } 00:24:44.851 15:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:44.851 15:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:44.851 15:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:44.851 15:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:44.851 15:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc 
bdev_nvme_get_controllers 00:24:44.851 15:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:24:44.851 15:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:45.111 15:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:24:45.111 15:29:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:24:46.049 15:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:24:46.049 15:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:24:46.049 15:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:46.308 15:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:24:46.308 15:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:46.308 15:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.308 15:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:46.308 15:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.308 15:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:46.308 15:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:46.308 15:29:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:47.247 nvme0n1 00:24:47.247 15:29:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:47.247 15:29:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.247 15:29:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:47.247 15:29:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.247 15:29:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:47.247 
15:29:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:47.247 15:29:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:47.247 15:29:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:24:47.247 15:29:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:47.247 15:29:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:24:47.247 15:29:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:47.247 15:29:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:47.247 15:29:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:47.506 request: 00:24:47.506 { 00:24:47.506 "name": "nvme0", 00:24:47.506 "dhchap_key": "key2", 00:24:47.506 "dhchap_ctrlr_key": "key0", 00:24:47.506 "method": "bdev_nvme_set_keys", 00:24:47.506 "req_id": 1 00:24:47.506 } 00:24:47.506 Got JSON-RPC error response 00:24:47.506 response: 00:24:47.506 { 00:24:47.506 "code": -13, 00:24:47.506 "message": "Permission denied" 00:24:47.506 } 00:24:47.506 15:29:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:47.506 15:29:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:47.506 15:29:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:47.506 15:29:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:47.506 15:29:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:24:47.506 15:29:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:24:47.506 15:29:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:47.765 15:29:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:24:47.765 15:29:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:24:48.703 15:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:24:48.703 15:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:24:48.703 15:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:48.963 15:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:24:48.963 15:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:24:48.963 15:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:24:48.963 15:29:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3131351 00:24:48.963 15:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3131351 ']' 00:24:48.963 15:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3131351 00:24:48.963 15:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:24:48.963 15:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:48.963 15:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3131351 00:24:48.963 15:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:48.963 15:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:48.963 15:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3131351' 00:24:48.963 killing process with pid 3131351 00:24:48.963 15:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3131351 00:24:48.963 15:29:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3131351 00:24:51.502 15:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:24:51.502 15:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:51.502 15:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:24:51.502 15:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:24:51.502 15:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:24:51.502 15:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:24:51.502 15:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:51.502 15:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:24:51.502 rmmod nvme_rdma 00:24:51.502 rmmod nvme_fabrics 00:24:51.502 15:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:51.502 15:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:24:51.502 15:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:24:51.502 15:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3151437 ']' 00:24:51.502 15:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3151437 00:24:51.502 15:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3151437 ']' 00:24:51.502 15:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3151437 00:24:51.502 15:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:24:51.502 15:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:51.502 15:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3151437 00:24:51.502 15:29:18 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:51.502 15:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:51.502 15:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3151437' 00:24:51.502 killing process with pid 3151437 00:24:51.502 15:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3151437 00:24:51.502 15:29:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3151437 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.1r2 /tmp/spdk.key-sha256.tkB /tmp/spdk.key-sha384.ny6 /tmp/spdk.key-sha512.PwT /tmp/spdk.key-sha512.cHw /tmp/spdk.key-sha384.z6T /tmp/spdk.key-sha256.aXq '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:24:52.882 00:24:52.882 real 2m55.570s 00:24:52.882 user 6m39.288s 00:24:52.882 sys 0m26.564s 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:52.882 ************************************ 00:24:52.882 END TEST nvmf_auth_target 00:24:52.882 ************************************ 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:52.882 ************************************ 00:24:52.882 START TEST nvmf_fuzz 00:24:52.882 ************************************ 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:24:52.882 * Looking for test storage... 
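For reference, the DH-HMAC-CHAP exchange that nvmf_auth_target just finished exercising boils down to a short sequence of rpc.py calls. The sketch below is a condensed illustration, not the test script itself: the host socket (/var/tmp/host.sock), target address (192.168.100.8:4420), NQNs and key slots are copied from the log above, while the target-side RPC socket is left at rpc.py's default and is an assumption.

# Condensed sketch of the auth flow above; assumes a target and a host bdev app are already running
# and that key1/key2/key3 were registered in the keyring beforehand (as the test did earlier).
HOST_RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
TGT_RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py"   # default target socket (assumption)
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562

# Attaching with a key the subsystem does not currently accept fails with -5 (Input/output error).
$HOST_RPC bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
  -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key1 || true

# Attaching with the matching pair succeeds and exposes nvme0n1 on the host.
$HOST_RPC bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
  -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

# Rotation: the target side is re-keyed first, then the host updates the live controller in place.
$TGT_RPC nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key key3
$HOST_RPC bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

# Rotating to a pair the target was not given is rejected with -13 (Permission denied).
$HOST_RPC bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 || true

$HOST_RPC bdev_nvme_detach_controller nvme0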
00:24:52.882 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:52.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.882 --rc genhtml_branch_coverage=1 00:24:52.882 --rc genhtml_function_coverage=1 00:24:52.882 --rc genhtml_legend=1 00:24:52.882 --rc geninfo_all_blocks=1 00:24:52.882 --rc geninfo_unexecuted_blocks=1 00:24:52.882 00:24:52.882 ' 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:52.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.882 --rc genhtml_branch_coverage=1 00:24:52.882 --rc genhtml_function_coverage=1 00:24:52.882 --rc genhtml_legend=1 00:24:52.882 --rc geninfo_all_blocks=1 00:24:52.882 --rc geninfo_unexecuted_blocks=1 00:24:52.882 00:24:52.882 ' 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:52.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.882 --rc genhtml_branch_coverage=1 00:24:52.882 --rc genhtml_function_coverage=1 00:24:52.882 --rc genhtml_legend=1 00:24:52.882 --rc geninfo_all_blocks=1 00:24:52.882 --rc geninfo_unexecuted_blocks=1 00:24:52.882 00:24:52.882 ' 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:52.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.882 --rc genhtml_branch_coverage=1 00:24:52.882 --rc genhtml_function_coverage=1 00:24:52.882 --rc genhtml_legend=1 00:24:52.882 --rc geninfo_all_blocks=1 00:24:52.882 --rc geninfo_unexecuted_blocks=1 00:24:52.882 00:24:52.882 ' 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.882 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:52.883 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:52.883 15:29:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:25:01.016 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:01.016 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:25:01.017 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
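At this point nvmftestinit has matched the two Mellanox ConnectX ports (vendor 0x15b3, device 0x1015) at 0000:18:00.0 and 0000:18:00.1; the entries that follow resolve their netdev names from sysfs, load the RDMA kernel modules and assign 192.168.100.8/24 and 192.168.100.9/24. A minimal sketch of that lookup is shown here, using the same sysfs path and ip/awk/cut pattern as nvmf/common.sh; the loop itself is illustrative, not the harness code, and only makes sense after allocate_nic_ips has assigned the addresses.

# For each matched PCI function, the netdev name lives under its sysfs node.
for pci in 0000:18:00.0 0000:18:00.1; do
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
  netdev=${pci_net_devs[0]##*/}
  # Read back the IPv4 address the same way the harness does further down.
  ip -o -4 addr show "$netdev" | awk '{print $4}' | cut -d/ -f1
done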
00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:25:01.017 Found net devices under 0000:18:00.0: mlx_0_0 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:25:01.017 Found net devices under 0000:18:00.1: mlx_0_1 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # rdma_device_init 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # uname 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@72 -- # modprobe 
rdma_ucm 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@530 -- # allocate_nic_ips 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:25:01.017 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:01.017 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:25:01.017 altname enp24s0f0np0 00:25:01.017 altname ens785f0np0 00:25:01.017 inet 192.168.100.8/24 scope global mlx_0_0 
00:25:01.017 valid_lft forever preferred_lft forever 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:25:01.017 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:01.017 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:25:01.017 altname enp24s0f1np1 00:25:01.017 altname ens785f1np1 00:25:01.017 inet 192.168.100.9/24 scope global mlx_0_1 00:25:01.017 valid_lft forever preferred_lft forever 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:01.017 15:29:27 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:01.017 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:25:01.018 192.168.100.9' 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:25:01.018 192.168.100.9' 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # head -n 1 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:25:01.018 192.168.100.9' 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # tail -n +2 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # head -n 1 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3157743 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 
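The trace above derives the two RDMA target addresses by parsing "ip -o -4 addr show" for each Mellanox interface and then splitting RDMA_IP_LIST with head/tail. A condensed sketch of that derivation, assuming the mlx_0_0/mlx_0_1 interface names this host reports (other hosts may differ) and using a hypothetical ip_of helper standing in for the harness's get_ip_address:

    # first IPv4 address on a given RDMA-capable netdev, as in the trace above
    ip_of() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
    RDMA_IP_LIST="$(ip_of mlx_0_0)
    $(ip_of mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 on this host
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9 on this host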
00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3157743 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@833 -- # '[' -z 3157743 ']' 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:01.018 15:29:27 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:01.018 15:29:28 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:01.018 15:29:28 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@866 -- # return 0 00:25:01.018 15:29:28 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:01.018 15:29:28 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.018 15:29:28 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:01.018 15:29:28 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.018 15:29:28 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:01.018 15:29:28 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.018 15:29:28 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:01.018 Malloc0 00:25:01.018 15:29:28 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.018 15:29:28 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:01.018 15:29:28 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.018 15:29:28 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:01.018 15:29:28 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.018 15:29:28 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:01.018 15:29:28 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.018 15:29:28 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:01.018 15:29:28 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.018 15:29:28 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 
192.168.100.8 -s 4420 00:25:01.018 15:29:28 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.018 15:29:28 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:01.018 15:29:28 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.018 15:29:28 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:25:01.018 15:29:28 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:25:33.109 Fuzzing completed. Shutting down the fuzz application 00:25:33.109 00:25:33.109 Dumping successful admin opcodes: 00:25:33.109 8, 9, 10, 24, 00:25:33.109 Dumping successful io opcodes: 00:25:33.109 0, 9, 00:25:33.109 NS: 0x2000008f0ec0 I/O qp, Total commands completed: 745530, total successful commands: 4334, random_seed: 4062121152 00:25:33.109 NS: 0x2000008f0ec0 admin qp, Total commands completed: 124346, total successful commands: 1021, random_seed: 3139145856 00:25:33.109 15:29:59 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:33.678 Fuzzing completed. Shutting down the fuzz application 00:25:33.678 00:25:33.678 Dumping successful admin opcodes: 00:25:33.678 24, 00:25:33.678 Dumping successful io opcodes: 00:25:33.678 00:25:33.678 NS: 0x2000008f0ec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3410695992 00:25:33.678 NS: 0x2000008f0ec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3410788166 00:25:33.678 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:33.678 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.678 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:33.678 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.678 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:33.678 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:33.678 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:33.678 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:33.678 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:33.678 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:33.678 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:33.678 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:33.678 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-rdma 00:25:33.678 rmmod nvme_rdma 00:25:33.678 rmmod nvme_fabrics 00:25:33.678 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:33.678 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:33.678 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:33.678 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 3157743 ']' 00:25:33.678 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 3157743 00:25:33.678 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@952 -- # '[' -z 3157743 ']' 00:25:33.678 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # kill -0 3157743 00:25:33.678 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@957 -- # uname 00:25:33.678 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:33.678 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3157743 00:25:33.937 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:33.937 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:33.937 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3157743' 00:25:33.937 killing process with pid 3157743 00:25:33.937 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@971 -- # kill 3157743 00:25:33.937 15:30:01 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@976 -- # wait 3157743 00:25:35.321 15:30:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:35.321 15:30:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:25:35.321 15:30:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:35.321 00:25:35.321 real 0m42.557s 00:25:35.321 user 0m55.505s 00:25:35.321 sys 0m19.275s 00:25:35.321 15:30:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:35.321 15:30:02 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:35.321 ************************************ 00:25:35.321 END TEST nvmf_fuzz 00:25:35.321 ************************************ 00:25:35.321 15:30:02 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:25:35.321 15:30:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:35.321 15:30:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:35.321 15:30:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:35.321 ************************************ 00:25:35.321 START TEST nvmf_multiconnection 00:25:35.321 ************************************ 00:25:35.321 15:30:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:25:35.581 * Looking for test storage... 00:25:35.581 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:25:35.581 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:35.581 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lcov --version 00:25:35.581 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:35.581 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:35.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.582 --rc genhtml_branch_coverage=1 00:25:35.582 --rc genhtml_function_coverage=1 00:25:35.582 --rc genhtml_legend=1 00:25:35.582 --rc geninfo_all_blocks=1 00:25:35.582 --rc geninfo_unexecuted_blocks=1 00:25:35.582 00:25:35.582 ' 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:35.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.582 --rc genhtml_branch_coverage=1 00:25:35.582 --rc genhtml_function_coverage=1 00:25:35.582 --rc genhtml_legend=1 00:25:35.582 --rc geninfo_all_blocks=1 00:25:35.582 --rc geninfo_unexecuted_blocks=1 00:25:35.582 00:25:35.582 ' 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:35.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.582 --rc genhtml_branch_coverage=1 00:25:35.582 --rc genhtml_function_coverage=1 00:25:35.582 --rc genhtml_legend=1 00:25:35.582 --rc geninfo_all_blocks=1 00:25:35.582 --rc geninfo_unexecuted_blocks=1 00:25:35.582 00:25:35.582 ' 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:35.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.582 --rc genhtml_branch_coverage=1 00:25:35.582 --rc genhtml_function_coverage=1 00:25:35.582 --rc genhtml_legend=1 00:25:35.582 --rc geninfo_all_blocks=1 00:25:35.582 --rc geninfo_unexecuted_blocks=1 00:25:35.582 00:25:35.582 ' 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:35.582 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:35.582 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:35.583 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:25:35.583 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:35.583 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:35.583 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:35.583 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:35.583 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.583 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:35.583 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.583 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:35.583 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:35.583 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:35.583 15:30:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.164 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:42.164 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:42.164 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:42.164 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:42.164 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:42.164 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:42.164 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:42.164 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:42.164 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:42.164 
15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:42.164 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:42.164 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:42.164 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:42.164 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:42.164 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:42.164 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:42.164 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:42.164 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:42.164 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:42.164 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:42.164 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:25:42.165 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:25:42.165 15:30:09 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:25:42.165 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:25:42.165 Found net devices under 0000:18:00.0: mlx_0_0 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:42.165 15:30:09 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:25:42.165 Found net devices under 0000:18:00.1: mlx_0_1 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # rdma_device_init 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:25:42.165 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # uname 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@530 -- # allocate_nic_ips 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:25:42.426 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:42.426 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:25:42.426 altname enp24s0f0np0 00:25:42.426 altname ens785f0np0 00:25:42.426 inet 192.168.100.8/24 scope global mlx_0_0 00:25:42.426 valid_lft forever preferred_lft forever 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:42.426 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:25:42.427 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:42.427 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:25:42.427 altname enp24s0f1np1 00:25:42.427 altname ens785f1np1 00:25:42.427 inet 192.168.100.9/24 scope global mlx_0_1 00:25:42.427 valid_lft forever preferred_lft forever 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:25:42.427 192.168.100.9' 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:25:42.427 192.168.100.9' 00:25:42.427 15:30:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # head -n 1 00:25:42.427 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:42.427 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:25:42.427 192.168.100.9' 00:25:42.427 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # tail -n +2 00:25:42.427 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # head -n 1 00:25:42.427 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:42.427 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:25:42.427 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:42.427 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:25:42.427 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:25:42.427 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:25:42.427 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:42.427 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:42.427 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:42.427 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.427 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=3165674 00:25:42.427 15:30:10 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:42.427 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 3165674 00:25:42.427 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@833 -- # '[' -z 3165674 ']' 00:25:42.427 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.427 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:42.427 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:42.427 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:42.427 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.686 [2024-11-06 15:30:10.145897] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:25:42.686 [2024-11-06 15:30:10.146004] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:42.686 [2024-11-06 15:30:10.297728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:42.946 [2024-11-06 15:30:10.408598] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:42.946 [2024-11-06 15:30:10.408651] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:42.946 [2024-11-06 15:30:10.408663] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:42.946 [2024-11-06 15:30:10.408676] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:42.946 [2024-11-06 15:30:10.408686] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
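The multiconnection test launches its own target with a four-core mask (-m 0xF), records the PID as nvmfpid, and blocks in waitforlisten until the RPC socket is up; the EAL/app_setup_trace notices above are that process starting. A minimal sketch of the same launch-and-wait pattern, assuming the default /var/tmp/spdk.sock socket shown in the trace (the polling loop is a crude stand-in for the harness's waitforlisten, and the binary path is given relative to the SPDK checkout):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # stand-in for waitforlisten: poll until the RPC socket appears
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done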
00:25:42.946 [2024-11-06 15:30:10.410995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.946 [2024-11-06 15:30:10.411083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:42.946 [2024-11-06 15:30:10.411167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.946 [2024-11-06 15:30:10.411191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:43.515 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:43.515 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@866 -- # return 0 00:25:43.515 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:43.515 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:43.515 15:30:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.515 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:43.515 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:43.515 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.515 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.515 [2024-11-06 15:30:11.046018] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7fe89fba6940) succeed. 00:25:43.515 [2024-11-06 15:30:11.055573] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7fe89fb62940) succeed. 
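The eleven blocks that follow repeat the same four-RPC sequence per index i: create a 64 MB malloc bdev with 512-byte blocks, create subsystem nqn.2016-06.io.spdk:cnode$i with serial SPDK$i, attach the bdev as a namespace, and add an RDMA listener on 192.168.100.8:4420. Condensed into a sketch that calls scripts/rpc.py directly instead of the harness's rpc_cmd wrapper (an assumption; the RPC names and arguments are copied from the trace below):

# Per-subsystem setup equivalent to the rpc_cmd calls logged below, i = 1..11.
RPC=./scripts/rpc.py                   # assumed path inside an SPDK checkout
for i in $(seq 1 11); do
    $RPC bdev_malloc_create 64 512 -b Malloc$i
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
done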
00:25:43.775 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.775 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:43.775 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.775 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:43.775 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.775 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.034 Malloc1 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.035 [2024-11-06 15:30:11.465297] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.035 Malloc2 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 
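Once all eleven subsystems are exported, the initiator side of the test (the nvme connect / waitforserial steps further down in this trace) connects to each one over RDMA and polls lsblk until a block device carrying the expected serial shows up. A minimal sketch of that connect-and-wait pattern for a single index, with the host NQN/UUID and the nvme/lsblk invocations copied from the trace and the retry bound taken from the (( i++ <= 15 )) loop:

# Connect to one subsystem and wait for its namespace to appear as a block
# device; the serial SPDK$i was assigned via "-s SPDK$i" at subsystem creation.
HOSTID=809f3706-e051-e711-906e-0017a4403562
i=4                                    # example index; the test iterates 1..11
nvme connect -i 15 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:$HOSTID --hostid=$HOSTID \
    -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420

tries=0
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK$i)" -ge 1 ]; do
    tries=$((tries + 1))
    if [ "$tries" -gt 15 ]; then
        echo "namespace with serial SPDK$i never appeared" >&2
        exit 1
    fi
    sleep 2
done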
00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.035 Malloc3 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.035 
15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.035 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.295 Malloc4 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.295 Malloc5 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.295 15:30:11 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.295 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.555 Malloc6 00:25:44.555 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.555 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:44.555 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.555 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.555 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.555 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:44.555 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.555 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.555 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.555 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:25:44.555 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.555 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.555 15:30:11 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.555 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.555 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:44.555 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.555 15:30:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.555 Malloc7 00:25:44.555 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.555 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:44.555 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.555 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.555 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.555 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:44.555 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.555 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.555 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.555 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:25:44.555 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.555 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.555 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.555 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.555 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:44.555 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.555 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.555 Malloc8 00:25:44.555 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.555 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:44.555 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.555 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.555 15:30:12 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.555 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:44.556 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.556 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.556 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.556 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:25:44.556 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.556 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.556 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.556 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.556 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:44.556 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.556 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.815 Malloc9 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.815 15:30:12 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.815 Malloc10 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.815 Malloc11 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.815 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:45.075 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.075 15:30:12 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:45.075 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.075 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:45.075 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.075 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:25:45.075 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.075 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:45.075 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.075 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:45.075 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:45.075 15:30:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:25:46.014 15:30:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:46.014 15:30:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:25:46.014 15:30:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:25:46.014 15:30:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:25:46.014 15:30:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:25:47.920 15:30:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:25:47.920 15:30:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:25:47.920 15:30:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK1 00:25:47.920 15:30:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:25:47.920 15:30:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:25:47.920 15:30:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:25:47.920 15:30:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.920 15:30:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:25:48.858 15:30:16 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:48.858 15:30:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:25:48.858 15:30:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:25:48.858 15:30:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:25:48.858 15:30:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:25:51.395 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:25:51.395 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:25:51.395 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK2 00:25:51.395 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:25:51.395 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:25:51.395 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:25:51.395 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.396 15:30:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:25:51.964 15:30:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:51.964 15:30:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:25:51.964 15:30:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:25:51.964 15:30:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:25:51.964 15:30:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:25:53.874 15:30:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:25:53.874 15:30:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:25:53.874 15:30:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK3 00:25:53.874 15:30:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:25:53.874 15:30:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:25:53.874 15:30:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:25:53.874 15:30:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.874 15:30:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:25:55.254 15:30:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:55.254 15:30:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:25:55.254 15:30:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:25:55.254 15:30:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:25:55.254 15:30:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:25:57.162 15:30:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:25:57.162 15:30:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:25:57.162 15:30:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK4 00:25:57.162 15:30:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:25:57.162 15:30:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:25:57.162 15:30:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:25:57.162 15:30:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.162 15:30:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:25:58.100 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:58.100 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:25:58.100 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:25:58.100 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:25:58.100 15:30:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:00.019 15:30:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:00.019 15:30:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:00.019 15:30:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK5 00:26:00.019 15:30:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:00.019 15:30:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:00.019 15:30:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:00.019 15:30:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:00.019 15:30:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:26:01.049 15:30:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:01.049 15:30:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:01.049 15:30:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:01.049 15:30:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:01.049 15:30:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:02.956 15:30:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:02.956 15:30:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:02.956 15:30:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK6 00:26:02.956 15:30:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:02.956 15:30:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:02.956 15:30:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:02.956 15:30:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:02.956 15:30:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:26:03.894 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:03.894 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:03.894 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:03.894 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:03.894 15:30:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:06.434 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:06.434 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:06.434 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK7 00:26:06.434 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:06.434 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == 
nvme_device_counter )) 00:26:06.434 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:06.434 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.434 15:30:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:26:07.002 15:30:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:07.002 15:30:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:07.003 15:30:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:07.003 15:30:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:07.003 15:30:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:08.912 15:30:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:08.912 15:30:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:08.912 15:30:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK8 00:26:08.912 15:30:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:08.912 15:30:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:08.912 15:30:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:08.912 15:30:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:08.912 15:30:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:26:10.292 15:30:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:10.292 15:30:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:10.292 15:30:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:10.292 15:30:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:10.292 15:30:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:12.201 15:30:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:12.201 15:30:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:12.201 15:30:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK9 00:26:12.201 15:30:39 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:12.201 15:30:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:12.201 15:30:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:12.201 15:30:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:12.201 15:30:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:26:13.139 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:13.139 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:13.139 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:13.139 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:13.139 15:30:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:15.047 15:30:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:15.047 15:30:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:15.047 15:30:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK10 00:26:15.047 15:30:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:15.047 15:30:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:15.047 15:30:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:15.047 15:30:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:15.047 15:30:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:26:15.983 15:30:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:15.983 15:30:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:15.983 15:30:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:15.983 15:30:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:15.983 15:30:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:18.521 15:30:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:18.521 15:30:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:18.521 15:30:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK11 00:26:18.521 15:30:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:18.521 15:30:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:18.521 15:30:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:18.521 15:30:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:18.521 [global] 00:26:18.521 thread=1 00:26:18.521 invalidate=1 00:26:18.521 rw=read 00:26:18.521 time_based=1 00:26:18.521 runtime=10 00:26:18.521 ioengine=libaio 00:26:18.521 direct=1 00:26:18.521 bs=262144 00:26:18.521 iodepth=64 00:26:18.521 norandommap=1 00:26:18.521 numjobs=1 00:26:18.521 00:26:18.521 [job0] 00:26:18.521 filename=/dev/nvme0n1 00:26:18.521 [job1] 00:26:18.521 filename=/dev/nvme10n1 00:26:18.521 [job2] 00:26:18.521 filename=/dev/nvme1n1 00:26:18.521 [job3] 00:26:18.521 filename=/dev/nvme2n1 00:26:18.521 [job4] 00:26:18.521 filename=/dev/nvme3n1 00:26:18.521 [job5] 00:26:18.521 filename=/dev/nvme4n1 00:26:18.521 [job6] 00:26:18.521 filename=/dev/nvme5n1 00:26:18.521 [job7] 00:26:18.521 filename=/dev/nvme6n1 00:26:18.521 [job8] 00:26:18.521 filename=/dev/nvme7n1 00:26:18.521 [job9] 00:26:18.521 filename=/dev/nvme8n1 00:26:18.521 [job10] 00:26:18.521 filename=/dev/nvme9n1 00:26:18.521 Could not set queue depth (nvme0n1) 00:26:18.521 Could not set queue depth (nvme10n1) 00:26:18.521 Could not set queue depth (nvme1n1) 00:26:18.521 Could not set queue depth (nvme2n1) 00:26:18.521 Could not set queue depth (nvme3n1) 00:26:18.521 Could not set queue depth (nvme4n1) 00:26:18.521 Could not set queue depth (nvme5n1) 00:26:18.521 Could not set queue depth (nvme6n1) 00:26:18.521 Could not set queue depth (nvme7n1) 00:26:18.521 Could not set queue depth (nvme8n1) 00:26:18.521 Could not set queue depth (nvme9n1) 00:26:18.521 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:18.521 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:18.521 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:18.521 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:18.521 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:18.521 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:18.521 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:18.521 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:18.521 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:18.521 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:18.521 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:18.521 fio-3.35 00:26:18.521 Starting 11 threads 00:26:30.733 00:26:30.733 job0: (groupid=0, jobs=1): err= 0: pid=3170686: Wed Nov 6 15:30:56 2024 00:26:30.733 read: IOPS=1020, BW=255MiB/s (268MB/s)(2566MiB/10053msec) 00:26:30.733 slat (usec): min=11, max=93353, avg=686.51, stdev=3289.63 00:26:30.733 clat (usec): min=649, max=179511, avg=61939.43, stdev=31256.57 00:26:30.733 lat (usec): min=690, max=222246, avg=62625.95, stdev=31666.79 00:26:30.733 clat percentiles (usec): 00:26:30.733 | 1.00th=[ 1844], 5.00th=[ 11076], 10.00th=[ 19792], 20.00th=[ 35914], 00:26:30.733 | 30.00th=[ 42730], 40.00th=[ 51643], 50.00th=[ 56886], 60.00th=[ 69731], 00:26:30.733 | 70.00th=[ 81265], 80.00th=[ 91751], 90.00th=[104334], 95.00th=[112722], 00:26:30.733 | 99.00th=[133694], 99.50th=[137364], 99.90th=[145753], 99.95th=[154141], 00:26:30.733 | 99.99th=[164627] 00:26:30.733 bw ( KiB/s): min=168960, max=482816, per=7.96%, avg=261100.75, stdev=86287.46, samples=20 00:26:30.733 iops : min= 660, max= 1886, avg=1019.90, stdev=337.08, samples=20 00:26:30.733 lat (usec) : 750=0.01%, 1000=0.05% 00:26:30.733 lat (msec) : 2=1.07%, 4=1.73%, 10=1.72%, 20=5.61%, 50=26.61% 00:26:30.733 lat (msec) : 100=50.64%, 250=12.55% 00:26:30.733 cpu : usr=0.45%, sys=4.22%, ctx=3671, majf=0, minf=3815 00:26:30.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:26:30.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:30.733 issued rwts: total=10263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.733 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:30.733 job1: (groupid=0, jobs=1): err= 0: pid=3170687: Wed Nov 6 15:30:56 2024 00:26:30.733 read: IOPS=1152, BW=288MiB/s (302MB/s)(2902MiB/10073msec) 00:26:30.733 slat (usec): min=11, max=63482, avg=698.59, stdev=2722.05 00:26:30.733 clat (usec): min=785, max=156956, avg=54772.30, stdev=25843.68 00:26:30.733 lat (usec): min=825, max=160106, avg=55470.89, stdev=26275.94 00:26:30.733 clat percentiles (msec): 00:26:30.733 | 1.00th=[ 3], 5.00th=[ 15], 10.00th=[ 21], 20.00th=[ 35], 00:26:30.733 | 30.00th=[ 41], 40.00th=[ 46], 50.00th=[ 54], 60.00th=[ 59], 00:26:30.733 | 70.00th=[ 68], 80.00th=[ 79], 90.00th=[ 93], 95.00th=[ 99], 00:26:30.733 | 99.00th=[ 110], 99.50th=[ 114], 99.90th=[ 134], 99.95th=[ 138], 00:26:30.733 | 99.99th=[ 157] 00:26:30.733 bw ( KiB/s): min=175104, max=658944, per=9.00%, avg=295497.60, stdev=117219.83, samples=20 00:26:30.733 iops : min= 684, max= 2574, avg=1154.25, stdev=457.89, samples=20 00:26:30.733 lat (usec) : 1000=0.04% 00:26:30.733 lat (msec) : 2=0.33%, 4=1.31%, 10=2.45%, 20=5.38%, 50=35.81% 00:26:30.733 lat (msec) : 100=50.96%, 250=3.71% 00:26:30.733 cpu : usr=0.28%, sys=5.21%, ctx=3637, majf=0, minf=4097 00:26:30.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:26:30.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:30.733 issued rwts: total=11608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.733 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:30.733 job2: (groupid=0, jobs=1): err= 0: pid=3170688: Wed Nov 6 15:30:56 2024 00:26:30.733 read: IOPS=1009, BW=252MiB/s (265MB/s)(2537MiB/10053msec) 00:26:30.733 slat (usec): min=11, max=107903, avg=767.14, stdev=3522.18 00:26:30.733 clat (usec): min=530, 
max=230877, avg=62573.24, stdev=32599.75 00:26:30.733 lat (usec): min=569, max=230926, avg=63340.39, stdev=33144.58 00:26:30.733 clat percentiles (msec): 00:26:30.733 | 1.00th=[ 3], 5.00th=[ 11], 10.00th=[ 21], 20.00th=[ 34], 00:26:30.733 | 30.00th=[ 39], 40.00th=[ 52], 50.00th=[ 61], 60.00th=[ 75], 00:26:30.733 | 70.00th=[ 86], 80.00th=[ 93], 90.00th=[ 106], 95.00th=[ 115], 00:26:30.733 | 99.00th=[ 136], 99.50th=[ 138], 99.90th=[ 144], 99.95th=[ 144], 00:26:30.733 | 99.99th=[ 150] 00:26:30.733 bw ( KiB/s): min=151040, max=480256, per=7.87%, avg=258106.00, stdev=77398.10, samples=20 00:26:30.733 iops : min= 590, max= 1876, avg=1008.20, stdev=302.36, samples=20 00:26:30.733 lat (usec) : 750=0.02%, 1000=0.04% 00:26:30.733 lat (msec) : 2=0.89%, 4=1.75%, 10=2.16%, 20=4.91%, 50=29.53% 00:26:30.733 lat (msec) : 100=47.03%, 250=13.67% 00:26:30.733 cpu : usr=0.26%, sys=4.25%, ctx=3591, majf=0, minf=4097 00:26:30.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:26:30.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:30.733 issued rwts: total=10146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.733 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:30.733 job3: (groupid=0, jobs=1): err= 0: pid=3170689: Wed Nov 6 15:30:56 2024 00:26:30.733 read: IOPS=1608, BW=402MiB/s (422MB/s)(4051MiB/10073msec) 00:26:30.733 slat (usec): min=10, max=44689, avg=534.23, stdev=1874.45 00:26:30.733 clat (usec): min=526, max=160978, avg=39204.41, stdev=26162.57 00:26:30.733 lat (usec): min=545, max=178520, avg=39738.63, stdev=26502.56 00:26:30.733 clat percentiles (msec): 00:26:30.733 | 1.00th=[ 3], 5.00th=[ 14], 10.00th=[ 17], 20.00th=[ 19], 00:26:30.733 | 30.00th=[ 20], 40.00th=[ 24], 50.00th=[ 34], 60.00th=[ 39], 00:26:30.733 | 70.00th=[ 48], 80.00th=[ 57], 90.00th=[ 83], 95.00th=[ 93], 00:26:30.733 | 99.00th=[ 118], 99.50th=[ 128], 99.90th=[ 146], 99.95th=[ 150], 00:26:30.733 | 99.99th=[ 157] 00:26:30.733 bw ( KiB/s): min=177152, max=890634, per=12.59%, avg=413094.90, stdev=204379.60, samples=20 00:26:30.733 iops : min= 692, max= 3479, avg=1613.65, stdev=798.35, samples=20 00:26:30.733 lat (usec) : 750=0.01%, 1000=0.02% 00:26:30.733 lat (msec) : 2=0.57%, 4=1.44%, 10=2.18%, 20=31.03%, 50=36.55% 00:26:30.733 lat (msec) : 100=25.35%, 250=2.85% 00:26:30.733 cpu : usr=0.46%, sys=5.42%, ctx=3932, majf=0, minf=4097 00:26:30.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:26:30.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:30.733 issued rwts: total=16203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.733 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:30.733 job4: (groupid=0, jobs=1): err= 0: pid=3170690: Wed Nov 6 15:30:56 2024 00:26:30.733 read: IOPS=1085, BW=271MiB/s (285MB/s)(2728MiB/10052msec) 00:26:30.733 slat (usec): min=11, max=62865, avg=736.57, stdev=3221.21 00:26:30.733 clat (usec): min=795, max=160296, avg=58166.37, stdev=31330.15 00:26:30.733 lat (usec): min=838, max=188113, avg=58902.94, stdev=31831.49 00:26:30.733 clat percentiles (msec): 00:26:30.733 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 16], 20.00th=[ 33], 00:26:30.733 | 30.00th=[ 39], 40.00th=[ 49], 50.00th=[ 57], 60.00th=[ 64], 00:26:30.733 | 70.00th=[ 78], 80.00th=[ 88], 90.00th=[ 100], 95.00th=[ 111], 00:26:30.733 | 99.00th=[ 
131], 99.50th=[ 136], 99.90th=[ 153], 99.95th=[ 157], 00:26:30.733 | 99.99th=[ 161] 00:26:30.733 bw ( KiB/s): min=147456, max=462848, per=8.46%, avg=277668.25, stdev=97121.66, samples=20 00:26:30.733 iops : min= 576, max= 1808, avg=1084.60, stdev=379.42, samples=20 00:26:30.733 lat (usec) : 1000=0.01% 00:26:30.733 lat (msec) : 2=0.68%, 4=1.32%, 10=4.71%, 20=6.89%, 50=28.21% 00:26:30.733 lat (msec) : 100=48.63%, 250=9.55% 00:26:30.733 cpu : usr=0.48%, sys=4.11%, ctx=3637, majf=0, minf=4097 00:26:30.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:26:30.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:30.733 issued rwts: total=10910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.733 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:30.733 job5: (groupid=0, jobs=1): err= 0: pid=3170691: Wed Nov 6 15:30:56 2024 00:26:30.733 read: IOPS=974, BW=244MiB/s (255MB/s)(2454MiB/10072msec) 00:26:30.733 slat (usec): min=11, max=82660, avg=798.55, stdev=3000.69 00:26:30.733 clat (usec): min=556, max=164366, avg=64807.52, stdev=23851.91 00:26:30.733 lat (usec): min=649, max=205595, avg=65606.07, stdev=24270.37 00:26:30.733 clat percentiles (msec): 00:26:30.733 | 1.00th=[ 4], 5.00th=[ 30], 10.00th=[ 36], 20.00th=[ 47], 00:26:30.733 | 30.00th=[ 53], 40.00th=[ 57], 50.00th=[ 64], 60.00th=[ 70], 00:26:30.733 | 70.00th=[ 77], 80.00th=[ 87], 90.00th=[ 96], 95.00th=[ 103], 00:26:30.733 | 99.00th=[ 125], 99.50th=[ 132], 99.90th=[ 146], 99.95th=[ 157], 00:26:30.733 | 99.99th=[ 165] 00:26:30.733 bw ( KiB/s): min=159232, max=480256, per=7.61%, avg=249631.70, stdev=69525.69, samples=20 00:26:30.733 iops : min= 622, max= 1876, avg=975.10, stdev=271.59, samples=20 00:26:30.733 lat (usec) : 750=0.01% 00:26:30.733 lat (msec) : 2=0.29%, 4=0.77%, 10=1.33%, 20=0.95%, 50=21.23% 00:26:30.734 lat (msec) : 100=69.24%, 250=6.17% 00:26:30.734 cpu : usr=0.43%, sys=4.22%, ctx=3182, majf=0, minf=4097 00:26:30.734 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:26:30.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:30.734 issued rwts: total=9814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.734 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:30.734 job6: (groupid=0, jobs=1): err= 0: pid=3170692: Wed Nov 6 15:30:56 2024 00:26:30.734 read: IOPS=1009, BW=252MiB/s (265MB/s)(2542MiB/10071msec) 00:26:30.734 slat (usec): min=11, max=61597, avg=708.48, stdev=2706.91 00:26:30.734 clat (usec): min=528, max=159008, avg=62623.80, stdev=30641.83 00:26:30.734 lat (usec): min=566, max=181332, avg=63332.28, stdev=31047.79 00:26:30.734 clat percentiles (msec): 00:26:30.734 | 1.00th=[ 14], 5.00th=[ 20], 10.00th=[ 21], 20.00th=[ 35], 00:26:30.734 | 30.00th=[ 42], 40.00th=[ 52], 50.00th=[ 61], 60.00th=[ 74], 00:26:30.734 | 70.00th=[ 82], 80.00th=[ 90], 90.00th=[ 101], 95.00th=[ 115], 00:26:30.734 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 157], 99.95th=[ 159], 00:26:30.734 | 99.99th=[ 159] 00:26:30.734 bw ( KiB/s): min=158208, max=680960, per=7.88%, avg=258610.65, stdev=117868.77, samples=20 00:26:30.734 iops : min= 618, max= 2660, avg=1010.15, stdev=460.42, samples=20 00:26:30.734 lat (usec) : 750=0.03% 00:26:30.734 lat (msec) : 2=0.30%, 4=0.25%, 10=0.04%, 20=6.34%, 50=31.21% 00:26:30.734 lat (msec) : 100=51.51%, 250=10.32% 
00:26:30.734 cpu : usr=0.44%, sys=4.26%, ctx=3113, majf=0, minf=4097 00:26:30.734 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:26:30.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:30.734 issued rwts: total=10166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.734 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:30.734 job7: (groupid=0, jobs=1): err= 0: pid=3170693: Wed Nov 6 15:30:56 2024 00:26:30.734 read: IOPS=1049, BW=262MiB/s (275MB/s)(2643MiB/10073msec) 00:26:30.734 slat (usec): min=11, max=59579, avg=754.48, stdev=2704.14 00:26:30.734 clat (usec): min=1502, max=164226, avg=60157.38, stdev=24647.51 00:26:30.734 lat (usec): min=1563, max=164262, avg=60911.86, stdev=25074.14 00:26:30.734 clat percentiles (msec): 00:26:30.734 | 1.00th=[ 6], 5.00th=[ 18], 10.00th=[ 34], 20.00th=[ 40], 00:26:30.734 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 58], 60.00th=[ 65], 00:26:30.734 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 94], 95.00th=[ 101], 00:26:30.734 | 99.00th=[ 128], 99.50th=[ 136], 99.90th=[ 150], 99.95th=[ 157], 00:26:30.734 | 99.99th=[ 157] 00:26:30.734 bw ( KiB/s): min=166400, max=498688, per=8.20%, avg=268972.15, stdev=84416.13, samples=20 00:26:30.734 iops : min= 650, max= 1948, avg=1050.65, stdev=329.73, samples=20 00:26:30.734 lat (msec) : 2=0.03%, 4=0.50%, 10=1.65%, 20=4.10%, 50=26.05% 00:26:30.734 lat (msec) : 100=62.62%, 250=5.05% 00:26:30.734 cpu : usr=0.41%, sys=4.22%, ctx=3238, majf=0, minf=4097 00:26:30.734 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:26:30.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:30.734 issued rwts: total=10571,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.734 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:30.734 job8: (groupid=0, jobs=1): err= 0: pid=3170695: Wed Nov 6 15:30:56 2024 00:26:30.734 read: IOPS=1480, BW=370MiB/s (388MB/s)(3707MiB/10015msec) 00:26:30.734 slat (usec): min=10, max=65637, avg=502.29, stdev=2050.46 00:26:30.734 clat (usec): min=887, max=177200, avg=42681.45, stdev=29554.62 00:26:30.734 lat (usec): min=941, max=206359, avg=43183.74, stdev=29870.00 00:26:30.734 clat percentiles (msec): 00:26:30.734 | 1.00th=[ 3], 5.00th=[ 12], 10.00th=[ 18], 20.00th=[ 19], 00:26:30.734 | 30.00th=[ 22], 40.00th=[ 30], 50.00th=[ 36], 60.00th=[ 40], 00:26:30.734 | 70.00th=[ 51], 80.00th=[ 59], 90.00th=[ 91], 95.00th=[ 110], 00:26:30.734 | 99.00th=[ 133], 99.50th=[ 136], 99.90th=[ 142], 99.95th=[ 159], 00:26:30.734 | 99.99th=[ 176] 00:26:30.734 bw ( KiB/s): min=144384, max=865280, per=11.52%, avg=377967.00, stdev=198646.34, samples=20 00:26:30.734 iops : min= 564, max= 3380, avg=1476.40, stdev=776.00, samples=20 00:26:30.734 lat (usec) : 1000=0.01% 00:26:30.734 lat (msec) : 2=0.41%, 4=1.59%, 10=2.45%, 20=22.13%, 50=43.47% 00:26:30.734 lat (msec) : 100=22.69%, 250=7.26% 00:26:30.734 cpu : usr=0.42%, sys=5.08%, ctx=4627, majf=0, minf=4097 00:26:30.734 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:26:30.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:30.734 issued rwts: total=14828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.734 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:26:30.734 job9: (groupid=0, jobs=1): err= 0: pid=3170701: Wed Nov 6 15:30:56 2024 00:26:30.734 read: IOPS=1090, BW=273MiB/s (286MB/s)(2734MiB/10024msec) 00:26:30.734 slat (usec): min=11, max=78364, avg=784.24, stdev=3514.12 00:26:30.734 clat (usec): min=1540, max=172714, avg=57817.31, stdev=32453.38 00:26:30.734 lat (usec): min=1629, max=183054, avg=58601.56, stdev=33081.24 00:26:30.734 clat percentiles (msec): 00:26:30.734 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 21], 20.00th=[ 30], 00:26:30.734 | 30.00th=[ 35], 40.00th=[ 40], 50.00th=[ 50], 60.00th=[ 60], 00:26:30.734 | 70.00th=[ 79], 80.00th=[ 92], 90.00th=[ 104], 95.00th=[ 116], 00:26:30.734 | 99.00th=[ 134], 99.50th=[ 138], 99.90th=[ 159], 99.95th=[ 167], 00:26:30.734 | 99.99th=[ 169] 00:26:30.734 bw ( KiB/s): min=143584, max=540160, per=8.48%, avg=278308.80, stdev=129887.09, samples=20 00:26:30.734 iops : min= 560, max= 2110, avg=1087.10, stdev=507.42, samples=20 00:26:30.734 lat (msec) : 2=0.12%, 4=0.54%, 10=1.85%, 20=6.52%, 50=41.48% 00:26:30.734 lat (msec) : 100=37.50%, 250=11.99% 00:26:30.734 cpu : usr=0.32%, sys=3.99%, ctx=3162, majf=0, minf=4097 00:26:30.734 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:26:30.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:30.734 issued rwts: total=10935,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.734 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:30.734 job10: (groupid=0, jobs=1): err= 0: pid=3170702: Wed Nov 6 15:30:56 2024 00:26:30.734 read: IOPS=1360, BW=340MiB/s (357MB/s)(3419MiB/10052msec) 00:26:30.734 slat (usec): min=10, max=116471, avg=662.44, stdev=2868.30 00:26:30.734 clat (usec): min=1334, max=239901, avg=46329.06, stdev=24814.07 00:26:30.734 lat (usec): min=1400, max=239970, avg=46991.50, stdev=25272.20 00:26:30.734 clat percentiles (msec): 00:26:30.734 | 1.00th=[ 8], 5.00th=[ 17], 10.00th=[ 19], 20.00th=[ 24], 00:26:30.734 | 30.00th=[ 35], 40.00th=[ 38], 50.00th=[ 42], 60.00th=[ 50], 00:26:30.734 | 70.00th=[ 56], 80.00th=[ 63], 90.00th=[ 79], 95.00th=[ 93], 00:26:30.734 | 99.00th=[ 133], 99.50th=[ 142], 99.90th=[ 146], 99.95th=[ 153], 00:26:30.734 | 99.99th=[ 236] 00:26:30.734 bw ( KiB/s): min=141312, max=603465, per=10.62%, avg=348472.00, stdev=114424.15, samples=20 00:26:30.734 iops : min= 552, max= 2357, avg=1361.20, stdev=446.93, samples=20 00:26:30.734 lat (msec) : 2=0.20%, 4=0.23%, 10=0.99%, 20=13.57%, 50=45.85% 00:26:30.734 lat (msec) : 100=36.19%, 250=2.97% 00:26:30.734 cpu : usr=0.51%, sys=5.55%, ctx=3312, majf=0, minf=4097 00:26:30.734 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:26:30.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:30.734 issued rwts: total=13676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.734 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:30.734 00:26:30.734 Run status group 0 (all jobs): 00:26:30.734 READ: bw=3205MiB/s (3360MB/s), 244MiB/s-402MiB/s (255MB/s-422MB/s), io=31.5GiB (33.8GB), run=10015-10073msec 00:26:30.734 00:26:30.734 Disk stats (read/write): 00:26:30.734 nvme0n1: ios=20317/0, merge=0/0, ticks=1238396/0, in_queue=1238396, util=97.44% 00:26:30.734 nvme10n1: ios=23026/0, merge=0/0, ticks=1235295/0, in_queue=1235295, util=97.63% 00:26:30.734 nvme1n1: ios=20012/0, merge=0/0, 
ticks=1237511/0, in_queue=1237511, util=97.88% 00:26:30.734 nvme2n1: ios=32203/0, merge=0/0, ticks=1230142/0, in_queue=1230142, util=98.01% 00:26:30.734 nvme3n1: ios=21570/0, merge=0/0, ticks=1232510/0, in_queue=1232510, util=98.07% 00:26:30.734 nvme4n1: ios=19476/0, merge=0/0, ticks=1237668/0, in_queue=1237668, util=98.36% 00:26:30.734 nvme5n1: ios=20162/0, merge=0/0, ticks=1238808/0, in_queue=1238808, util=98.50% 00:26:30.734 nvme6n1: ios=20957/0, merge=0/0, ticks=1234104/0, in_queue=1234104, util=98.62% 00:26:30.734 nvme7n1: ios=29263/0, merge=0/0, ticks=1238588/0, in_queue=1238588, util=98.97% 00:26:30.734 nvme8n1: ios=21601/0, merge=0/0, ticks=1234410/0, in_queue=1234410, util=99.12% 00:26:30.734 nvme9n1: ios=27140/0, merge=0/0, ticks=1232117/0, in_queue=1232117, util=99.25% 00:26:30.734 15:30:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:30.734 [global] 00:26:30.734 thread=1 00:26:30.734 invalidate=1 00:26:30.734 rw=randwrite 00:26:30.734 time_based=1 00:26:30.734 runtime=10 00:26:30.734 ioengine=libaio 00:26:30.734 direct=1 00:26:30.734 bs=262144 00:26:30.734 iodepth=64 00:26:30.734 norandommap=1 00:26:30.734 numjobs=1 00:26:30.734 00:26:30.734 [job0] 00:26:30.734 filename=/dev/nvme0n1 00:26:30.734 [job1] 00:26:30.734 filename=/dev/nvme10n1 00:26:30.734 [job2] 00:26:30.734 filename=/dev/nvme1n1 00:26:30.734 [job3] 00:26:30.734 filename=/dev/nvme2n1 00:26:30.734 [job4] 00:26:30.734 filename=/dev/nvme3n1 00:26:30.734 [job5] 00:26:30.734 filename=/dev/nvme4n1 00:26:30.734 [job6] 00:26:30.734 filename=/dev/nvme5n1 00:26:30.734 [job7] 00:26:30.734 filename=/dev/nvme6n1 00:26:30.734 [job8] 00:26:30.734 filename=/dev/nvme7n1 00:26:30.734 [job9] 00:26:30.734 filename=/dev/nvme8n1 00:26:30.734 [job10] 00:26:30.734 filename=/dev/nvme9n1 00:26:30.734 Could not set queue depth (nvme0n1) 00:26:30.734 Could not set queue depth (nvme10n1) 00:26:30.734 Could not set queue depth (nvme1n1) 00:26:30.734 Could not set queue depth (nvme2n1) 00:26:30.734 Could not set queue depth (nvme3n1) 00:26:30.734 Could not set queue depth (nvme4n1) 00:26:30.735 Could not set queue depth (nvme5n1) 00:26:30.735 Could not set queue depth (nvme6n1) 00:26:30.735 Could not set queue depth (nvme7n1) 00:26:30.735 Could not set queue depth (nvme8n1) 00:26:30.735 Could not set queue depth (nvme9n1) 00:26:30.735 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.735 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.735 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.735 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.735 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.735 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.735 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.735 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.735 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.735 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.735 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.735 fio-3.35 00:26:30.735 Starting 11 threads 00:26:40.726 00:26:40.726 job0: (groupid=0, jobs=1): err= 0: pid=3172017: Wed Nov 6 15:31:07 2024 00:26:40.726 write: IOPS=443, BW=111MiB/s (116MB/s)(1119MiB/10097msec); 0 zone resets 00:26:40.726 slat (usec): min=30, max=29937, avg=2217.43, stdev=4128.51 00:26:40.726 clat (usec): min=930, max=235837, avg=142088.33, stdev=32321.46 00:26:40.726 lat (usec): min=1110, max=235894, avg=144305.76, stdev=32685.94 00:26:40.726 clat percentiles (msec): 00:26:40.726 | 1.00th=[ 46], 5.00th=[ 79], 10.00th=[ 88], 20.00th=[ 123], 00:26:40.726 | 30.00th=[ 136], 40.00th=[ 144], 50.00th=[ 153], 60.00th=[ 157], 00:26:40.726 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 178], 00:26:40.726 | 99.00th=[ 190], 99.50th=[ 197], 99.90th=[ 226], 99.95th=[ 226], 00:26:40.726 | 99.99th=[ 236] 00:26:40.726 bw ( KiB/s): min=93696, max=201216, per=4.30%, avg=112984.30, stdev=23998.45, samples=20 00:26:40.726 iops : min= 366, max= 786, avg=441.30, stdev=93.74, samples=20 00:26:40.726 lat (usec) : 1000=0.02% 00:26:40.726 lat (msec) : 2=0.16%, 10=0.07%, 20=0.16%, 50=0.67%, 100=12.11% 00:26:40.726 lat (msec) : 250=86.82% 00:26:40.726 cpu : usr=1.36%, sys=1.71%, ctx=1124, majf=0, minf=144 00:26:40.726 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:40.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.726 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.726 issued rwts: total=0,4476,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.726 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.726 job1: (groupid=0, jobs=1): err= 0: pid=3172050: Wed Nov 6 15:31:07 2024 00:26:40.726 write: IOPS=1336, BW=334MiB/s (350MB/s)(3373MiB/10094msec); 0 zone resets 00:26:40.726 slat (usec): min=16, max=84787, avg=577.47, stdev=2443.60 00:26:40.726 clat (usec): min=518, max=249350, avg=47282.99, stdev=48299.76 00:26:40.726 lat (usec): min=567, max=258921, avg=47860.46, stdev=48975.32 00:26:40.726 clat percentiles (msec): 00:26:40.726 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 14], 20.00th=[ 19], 00:26:40.726 | 30.00th=[ 20], 40.00th=[ 21], 50.00th=[ 23], 60.00th=[ 35], 00:26:40.726 | 70.00th=[ 48], 80.00th=[ 66], 90.00th=[ 150], 95.00th=[ 167], 00:26:40.726 | 99.00th=[ 186], 99.50th=[ 194], 99.90th=[ 222], 99.95th=[ 241], 00:26:40.726 | 99.99th=[ 249] 00:26:40.726 bw ( KiB/s): min=93696, max=796672, per=13.08%, avg=343795.90, stdev=244437.45, samples=20 00:26:40.726 iops : min= 366, max= 3112, avg=1342.95, stdev=954.84, samples=20 00:26:40.726 lat (usec) : 750=0.07%, 1000=0.08% 00:26:40.726 lat (msec) : 2=0.59%, 4=1.33%, 10=5.47%, 20=27.23%, 50=36.49% 00:26:40.726 lat (msec) : 100=15.75%, 250=13.00% 00:26:40.726 cpu : usr=3.15%, sys=4.68%, ctx=4148, majf=0, minf=144 00:26:40.726 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:26:40.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.726 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.726 issued rwts: total=0,13493,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.726 latency : target=0, window=0, percentile=100.00%, depth=64 
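The randwrite job file assembled by fio-wrapper above (thread=1, invalidate=1, rw=randwrite, time_based=1, runtime=10, ioengine=libaio, direct=1, bs=262144, iodepth=64, norandommap=1, numjobs=1, one job per connected namespace) can be reproduced outside the test harness with a plain fio run. A minimal sketch, assuming a single SPDK-exported namespace is visible as /dev/nvme0n1; the device path and job-file name are illustrative assumptions, not taken from this run:

# Standalone approximation of one fio-wrapper job; requires fio and a
# connected NVMe-oF namespace at the assumed path /dev/nvme0n1.
cat > randwrite.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
EOF
fio randwrite.fio

A run of this single job prints a per-job report of the same shape as the job2-job10 blocks that follow in the log.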
00:26:40.726 job2: (groupid=0, jobs=1): err= 0: pid=3172073: Wed Nov 6 15:31:07 2024 00:26:40.726 write: IOPS=738, BW=185MiB/s (193MB/s)(1864MiB/10100msec); 0 zone resets 00:26:40.726 slat (usec): min=20, max=60989, avg=1187.26, stdev=3138.12 00:26:40.726 clat (usec): min=726, max=230321, avg=85475.83, stdev=62370.44 00:26:40.726 lat (usec): min=821, max=230416, avg=86663.10, stdev=63248.78 00:26:40.726 clat percentiles (usec): 00:26:40.726 | 1.00th=[ 1614], 5.00th=[ 5473], 10.00th=[ 11731], 20.00th=[ 20055], 00:26:40.726 | 30.00th=[ 25560], 40.00th=[ 46400], 50.00th=[ 73925], 60.00th=[122160], 00:26:40.726 | 70.00th=[139461], 80.00th=[156238], 90.00th=[166724], 95.00th=[170918], 00:26:40.726 | 99.00th=[185598], 99.50th=[196084], 99.90th=[221250], 99.95th=[223347], 00:26:40.726 | 99.99th=[229639] 00:26:40.726 bw ( KiB/s): min=93184, max=751104, per=7.20%, avg=189235.20, stdev=174566.23, samples=20 00:26:40.726 iops : min= 364, max= 2934, avg=739.20, stdev=681.90, samples=20 00:26:40.726 lat (usec) : 750=0.01%, 1000=0.09% 00:26:40.726 lat (msec) : 2=1.30%, 4=2.35%, 10=5.26%, 20=11.31%, 50=20.97% 00:26:40.726 lat (msec) : 100=14.47%, 250=44.24% 00:26:40.726 cpu : usr=2.25%, sys=3.14%, ctx=2258, majf=0, minf=416 00:26:40.726 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:40.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.726 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.726 issued rwts: total=0,7455,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.726 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.726 job3: (groupid=0, jobs=1): err= 0: pid=3172086: Wed Nov 6 15:31:07 2024 00:26:40.726 write: IOPS=893, BW=223MiB/s (234MB/s)(2240MiB/10029msec); 0 zone resets 00:26:40.726 slat (usec): min=19, max=132581, avg=960.96, stdev=3310.67 00:26:40.726 clat (usec): min=595, max=299669, avg=70648.39, stdev=61654.94 00:26:40.726 lat (usec): min=829, max=299726, avg=71609.35, stdev=62493.32 00:26:40.726 clat percentiles (usec): 00:26:40.726 | 1.00th=[ 1582], 5.00th=[ 3621], 10.00th=[ 5997], 20.00th=[ 17433], 00:26:40.726 | 30.00th=[ 24773], 40.00th=[ 37487], 50.00th=[ 40633], 60.00th=[ 62129], 00:26:40.726 | 70.00th=[110625], 80.00th=[147850], 90.00th=[164627], 95.00th=[173016], 00:26:40.726 | 99.00th=[193987], 99.50th=[210764], 99.90th=[283116], 99.95th=[295699], 00:26:40.726 | 99.99th=[299893] 00:26:40.726 bw ( KiB/s): min=94208, max=698368, per=8.67%, avg=227788.80, stdev=195086.10, samples=20 00:26:40.726 iops : min= 368, max= 2728, avg=889.80, stdev=762.06, samples=20 00:26:40.726 lat (usec) : 750=0.02%, 1000=0.23% 00:26:40.726 lat (msec) : 2=2.04%, 4=3.58%, 10=7.59%, 20=14.03%, 50=30.57% 00:26:40.726 lat (msec) : 100=10.78%, 250=30.89%, 500=0.27% 00:26:40.726 cpu : usr=2.18%, sys=3.23%, ctx=2773, majf=0, minf=199 00:26:40.726 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:40.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.726 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.726 issued rwts: total=0,8961,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.726 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.726 job4: (groupid=0, jobs=1): err= 0: pid=3172092: Wed Nov 6 15:31:07 2024 00:26:40.726 write: IOPS=943, BW=236MiB/s (247MB/s)(2367MiB/10034msec); 0 zone resets 00:26:40.726 slat (usec): min=25, max=64897, avg=962.96, stdev=2747.17 00:26:40.726 clat (usec): min=521, 
max=241072, avg=66851.96, stdev=48457.71 00:26:40.726 lat (usec): min=585, max=241137, avg=67814.92, stdev=49188.46 00:26:40.726 clat percentiles (msec): 00:26:40.726 | 1.00th=[ 7], 5.00th=[ 19], 10.00th=[ 36], 20.00th=[ 38], 00:26:40.726 | 30.00th=[ 39], 40.00th=[ 40], 50.00th=[ 42], 60.00th=[ 48], 00:26:40.726 | 70.00th=[ 64], 80.00th=[ 104], 90.00th=[ 161], 95.00th=[ 171], 00:26:40.726 | 99.00th=[ 184], 99.50th=[ 190], 99.90th=[ 215], 99.95th=[ 228], 00:26:40.726 | 99.99th=[ 241] 00:26:40.726 bw ( KiB/s): min=91136, max=434176, per=9.16%, avg=240728.25, stdev=140614.36, samples=20 00:26:40.726 iops : min= 356, max= 1696, avg=940.30, stdev=549.32, samples=20 00:26:40.726 lat (usec) : 750=0.13%, 1000=0.04% 00:26:40.726 lat (msec) : 2=0.34%, 4=0.24%, 10=1.04%, 20=3.93%, 50=55.53% 00:26:40.726 lat (msec) : 100=18.04%, 250=20.72% 00:26:40.726 cpu : usr=2.99%, sys=3.86%, ctx=2451, majf=0, minf=537 00:26:40.726 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:26:40.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.726 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.726 issued rwts: total=0,9466,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.726 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.726 job5: (groupid=0, jobs=1): err= 0: pid=3172111: Wed Nov 6 15:31:07 2024 00:26:40.726 write: IOPS=1850, BW=463MiB/s (485MB/s)(4642MiB/10036msec); 0 zone resets 00:26:40.726 slat (usec): min=19, max=116976, avg=491.13, stdev=1879.83 00:26:40.726 clat (usec): min=731, max=258608, avg=34085.18, stdev=25646.14 00:26:40.726 lat (usec): min=919, max=268423, avg=34576.30, stdev=25929.08 00:26:40.726 clat percentiles (msec): 00:26:40.726 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 18], 20.00th=[ 20], 00:26:40.726 | 30.00th=[ 21], 40.00th=[ 26], 50.00th=[ 34], 60.00th=[ 37], 00:26:40.726 | 70.00th=[ 39], 80.00th=[ 41], 90.00th=[ 47], 95.00th=[ 57], 00:26:40.726 | 99.00th=[ 174], 99.50th=[ 190], 99.90th=[ 239], 99.95th=[ 245], 00:26:40.726 | 99.99th=[ 257] 00:26:40.726 bw ( KiB/s): min=142848, max=921600, per=18.03%, avg=473753.60, stdev=189992.71, samples=20 00:26:40.726 iops : min= 558, max= 3600, avg=1850.60, stdev=742.16, samples=20 00:26:40.726 lat (usec) : 750=0.01%, 1000=0.03% 00:26:40.726 lat (msec) : 2=0.32%, 4=1.00%, 10=2.73%, 20=21.63%, 50=66.37% 00:26:40.726 lat (msec) : 100=5.43%, 250=2.44%, 500=0.04% 00:26:40.726 cpu : usr=5.33%, sys=5.89%, ctx=4289, majf=0, minf=145 00:26:40.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:26:40.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.727 issued rwts: total=0,18569,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.727 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.727 job6: (groupid=0, jobs=1): err= 0: pid=3172123: Wed Nov 6 15:31:07 2024 00:26:40.727 write: IOPS=870, BW=218MiB/s (228MB/s)(2199MiB/10099msec); 0 zone resets 00:26:40.727 slat (usec): min=26, max=122789, avg=881.76, stdev=3495.39 00:26:40.727 clat (usec): min=287, max=298246, avg=72575.31, stdev=59623.12 00:26:40.727 lat (usec): min=328, max=298304, avg=73457.06, stdev=60405.00 00:26:40.727 clat percentiles (usec): 00:26:40.727 | 1.00th=[ 1450], 5.00th=[ 5211], 10.00th=[ 14484], 20.00th=[ 29754], 00:26:40.727 | 30.00th=[ 36963], 40.00th=[ 38536], 50.00th=[ 42206], 60.00th=[ 53740], 00:26:40.727 | 70.00th=[ 78119], 
80.00th=[152044], 90.00th=[166724], 95.00th=[175113], 00:26:40.727 | 99.00th=[210764], 99.50th=[221250], 99.90th=[270533], 99.95th=[274727], 00:26:40.727 | 99.99th=[299893] 00:26:40.727 bw ( KiB/s): min=94208, max=552960, per=8.50%, avg=223513.60, stdev=130577.75, samples=20 00:26:40.727 iops : min= 368, max= 2160, avg=873.10, stdev=510.07, samples=20 00:26:40.727 lat (usec) : 500=0.13%, 750=0.19%, 1000=0.35% 00:26:40.727 lat (msec) : 2=0.90%, 4=2.15%, 10=4.56%, 20=4.05%, 50=44.17% 00:26:40.727 lat (msec) : 100=14.71%, 250=28.59%, 500=0.20% 00:26:40.727 cpu : usr=2.11%, sys=3.50%, ctx=2774, majf=0, minf=202 00:26:40.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:40.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.727 issued rwts: total=0,8794,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.727 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.727 job7: (groupid=0, jobs=1): err= 0: pid=3172131: Wed Nov 6 15:31:07 2024 00:26:40.727 write: IOPS=517, BW=129MiB/s (136MB/s)(1306MiB/10099msec); 0 zone resets 00:26:40.727 slat (usec): min=29, max=115330, avg=1719.45, stdev=4289.26 00:26:40.727 clat (usec): min=271, max=268567, avg=121988.18, stdev=55234.58 00:26:40.727 lat (usec): min=316, max=268640, avg=123707.63, stdev=56109.97 00:26:40.727 clat percentiles (usec): 00:26:40.727 | 1.00th=[ 1037], 5.00th=[ 7898], 10.00th=[ 27919], 20.00th=[ 57410], 00:26:40.727 | 30.00th=[101188], 40.00th=[131597], 50.00th=[143655], 60.00th=[152044], 00:26:40.727 | 70.00th=[160433], 80.00th=[164627], 90.00th=[173016], 95.00th=[177210], 00:26:40.727 | 99.00th=[217056], 99.50th=[227541], 99.90th=[263193], 99.95th=[263193], 00:26:40.727 | 99.99th=[270533] 00:26:40.727 bw ( KiB/s): min=88576, max=290816, per=5.03%, avg=132070.40, stdev=52546.24, samples=20 00:26:40.727 iops : min= 346, max= 1136, avg=515.90, stdev=205.26, samples=20 00:26:40.727 lat (usec) : 500=0.59%, 750=0.11%, 1000=0.29% 00:26:40.727 lat (msec) : 2=1.24%, 4=1.46%, 10=1.78%, 20=1.36%, 50=10.07% 00:26:40.727 lat (msec) : 100=12.62%, 250=70.34%, 500=0.13% 00:26:40.727 cpu : usr=1.43%, sys=2.18%, ctx=1707, majf=0, minf=13 00:26:40.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:40.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.727 issued rwts: total=0,5222,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.727 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.727 job8: (groupid=0, jobs=1): err= 0: pid=3172157: Wed Nov 6 15:31:07 2024 00:26:40.727 write: IOPS=450, BW=113MiB/s (118MB/s)(1138MiB/10101msec); 0 zone resets 00:26:40.727 slat (usec): min=28, max=52206, avg=2153.02, stdev=4290.52 00:26:40.727 clat (usec): min=1106, max=231379, avg=139763.18, stdev=36196.78 00:26:40.727 lat (usec): min=1191, max=231441, avg=141916.20, stdev=36660.34 00:26:40.727 clat percentiles (msec): 00:26:40.727 | 1.00th=[ 5], 5.00th=[ 65], 10.00th=[ 85], 20.00th=[ 117], 00:26:40.727 | 30.00th=[ 133], 40.00th=[ 142], 50.00th=[ 150], 60.00th=[ 159], 00:26:40.727 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 178], 00:26:40.727 | 99.00th=[ 188], 99.50th=[ 194], 99.90th=[ 222], 99.95th=[ 222], 00:26:40.727 | 99.99th=[ 232] 00:26:40.727 bw ( KiB/s): min=91648, max=200704, per=4.37%, avg=114955.45, stdev=25938.58, samples=20 
00:26:40.727 iops : min= 358, max= 784, avg=449.00, stdev=101.32, samples=20 00:26:40.727 lat (msec) : 2=0.22%, 4=0.35%, 10=0.57%, 20=0.11%, 50=1.10% 00:26:40.727 lat (msec) : 100=13.09%, 250=84.56% 00:26:40.727 cpu : usr=1.43%, sys=1.70%, ctx=1188, majf=0, minf=205 00:26:40.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:40.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.727 issued rwts: total=0,4553,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.727 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.727 job9: (groupid=0, jobs=1): err= 0: pid=3172170: Wed Nov 6 15:31:07 2024 00:26:40.727 write: IOPS=1285, BW=321MiB/s (337MB/s)(3226MiB/10036msec); 0 zone resets 00:26:40.727 slat (usec): min=20, max=60007, avg=664.03, stdev=2395.03 00:26:40.727 clat (usec): min=317, max=212137, avg=49084.61, stdev=47255.37 00:26:40.727 lat (usec): min=367, max=212218, avg=49748.64, stdev=47853.54 00:26:40.727 clat percentiles (usec): 00:26:40.727 | 1.00th=[ 1942], 5.00th=[ 13698], 10.00th=[ 18744], 20.00th=[ 19530], 00:26:40.727 | 30.00th=[ 20055], 40.00th=[ 20579], 50.00th=[ 22152], 60.00th=[ 38536], 00:26:40.727 | 70.00th=[ 49021], 80.00th=[ 69731], 90.00th=[147850], 95.00th=[164627], 00:26:40.727 | 99.00th=[181404], 99.50th=[187696], 99.90th=[198181], 99.95th=[200279], 00:26:40.727 | 99.99th=[212861] 00:26:40.727 bw ( KiB/s): min=88576, max=813056, per=12.51%, avg=328767.40, stdev=249815.49, samples=20 00:26:40.727 iops : min= 346, max= 3176, avg=1284.20, stdev=975.88, samples=20 00:26:40.727 lat (usec) : 500=0.02%, 750=0.03%, 1000=0.05% 00:26:40.727 lat (msec) : 2=0.93%, 4=0.58%, 10=2.29%, 20=23.72%, 50=42.89% 00:26:40.727 lat (msec) : 100=16.47%, 250=13.03% 00:26:40.727 cpu : usr=3.43%, sys=4.25%, ctx=3199, majf=0, minf=286 00:26:40.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:26:40.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.727 issued rwts: total=0,12905,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.727 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.727 job10: (groupid=0, jobs=1): err= 0: pid=3172179: Wed Nov 6 15:31:07 2024 00:26:40.727 write: IOPS=970, BW=243MiB/s (254MB/s)(2451MiB/10101msec); 0 zone resets 00:26:40.727 slat (usec): min=22, max=119087, avg=865.36, stdev=3268.04 00:26:40.727 clat (usec): min=623, max=293456, avg=65039.84, stdev=57025.84 00:26:40.727 lat (usec): min=761, max=293514, avg=65905.21, stdev=57834.21 00:26:40.727 clat percentiles (usec): 00:26:40.727 | 1.00th=[ 1729], 5.00th=[ 5932], 10.00th=[ 10290], 20.00th=[ 17957], 00:26:40.727 | 30.00th=[ 24511], 40.00th=[ 35914], 50.00th=[ 41157], 60.00th=[ 55313], 00:26:40.727 | 70.00th=[ 74974], 80.00th=[135267], 90.00th=[160433], 95.00th=[168821], 00:26:40.727 | 99.00th=[214959], 99.50th=[242222], 99.90th=[274727], 99.95th=[274727], 00:26:40.727 | 99.99th=[291505] 00:26:40.727 bw ( KiB/s): min=98304, max=818688, per=9.49%, avg=249406.50, stdev=195466.10, samples=20 00:26:40.727 iops : min= 384, max= 3198, avg=974.20, stdev=763.51, samples=20 00:26:40.727 lat (usec) : 750=0.05%, 1000=0.20% 00:26:40.727 lat (msec) : 2=1.35%, 4=2.01%, 10=6.12%, 20=14.25%, 50=33.57% 00:26:40.727 lat (msec) : 100=18.03%, 250=23.99%, 500=0.43% 00:26:40.727 cpu : usr=2.18%, sys=3.84%, ctx=2959, majf=0, 
minf=140 00:26:40.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:26:40.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.727 issued rwts: total=0,9804,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.727 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.727 00:26:40.727 Run status group 0 (all jobs): 00:26:40.727 WRITE: bw=2567MiB/s (2691MB/s), 111MiB/s-463MiB/s (116MB/s-485MB/s), io=25.3GiB (27.2GB), run=10029-10101msec 00:26:40.727 00:26:40.727 Disk stats (read/write): 00:26:40.727 nvme0n1: ios=49/8918, merge=0/0, ticks=26/1234736, in_queue=1234762, util=96.40% 00:26:40.727 nvme10n1: ios=0/26940, merge=0/0, ticks=0/1245146, in_queue=1245146, util=96.49% 00:26:40.727 nvme1n1: ios=0/14870, merge=0/0, ticks=0/1241196, in_queue=1241196, util=96.94% 00:26:40.727 nvme2n1: ios=0/17078, merge=0/0, ticks=0/1214084, in_queue=1214084, util=97.01% 00:26:40.727 nvme3n1: ios=0/18266, merge=0/0, ticks=0/1213823, in_queue=1213823, util=97.18% 00:26:40.727 nvme4n1: ios=0/36487, merge=0/0, ticks=0/1217174, in_queue=1217174, util=97.63% 00:26:40.727 nvme5n1: ios=0/17549, merge=0/0, ticks=0/1245924, in_queue=1245924, util=97.92% 00:26:40.727 nvme6n1: ios=0/10406, merge=0/0, ticks=0/1235351, in_queue=1235351, util=98.07% 00:26:40.727 nvme7n1: ios=0/9064, merge=0/0, ticks=0/1233051, in_queue=1233051, util=98.60% 00:26:40.727 nvme8n1: ios=0/25169, merge=0/0, ticks=0/1219193, in_queue=1219193, util=98.82% 00:26:40.727 nvme9n1: ios=0/19562, merge=0/0, ticks=0/1241236, in_queue=1241236, util=99.05% 00:26:40.727 15:31:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:40.727 15:31:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:40.727 15:31:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:40.727 15:31:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:41.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:41.295 15:31:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:41.296 15:31:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:26:41.296 15:31:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:41.296 15:31:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK1 00:26:41.296 15:31:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:41.296 15:31:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK1 00:26:41.296 15:31:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:26:41.296 15:31:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:41.296 15:31:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.296 15:31:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- 
# set +x 00:26:41.296 15:31:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.296 15:31:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:41.296 15:31:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:42.232 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:42.232 15:31:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:42.232 15:31:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:26:42.232 15:31:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:42.232 15:31:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK2 00:26:42.232 15:31:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:42.232 15:31:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK2 00:26:42.232 15:31:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:26:42.232 15:31:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:42.233 15:31:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.233 15:31:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.233 15:31:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.233 15:31:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.233 15:31:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:43.169 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:43.169 15:31:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:43.169 15:31:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:26:43.169 15:31:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:43.169 15:31:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK3 00:26:43.169 15:31:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:43.169 15:31:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK3 00:26:43.169 15:31:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:26:43.169 15:31:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:43.169 15:31:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.169 15:31:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:26:43.169 15:31:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.169 15:31:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.169 15:31:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:44.107 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:44.107 15:31:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:44.107 15:31:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:26:44.107 15:31:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:44.107 15:31:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK4 00:26:44.107 15:31:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:44.107 15:31:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK4 00:26:44.107 15:31:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:26:44.107 15:31:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:44.107 15:31:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.107 15:31:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.107 15:31:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.107 15:31:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.107 15:31:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:45.045 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:45.045 15:31:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:45.045 15:31:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:26:45.045 15:31:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:45.045 15:31:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK5 00:26:45.045 15:31:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:45.045 15:31:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK5 00:26:45.045 15:31:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:26:45.045 15:31:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:45.045 15:31:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.045 15:31:12 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:45.045 15:31:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.045 15:31:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:45.045 15:31:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:45.984 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:45.984 15:31:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:45.984 15:31:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:26:45.984 15:31:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:45.984 15:31:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK6 00:26:46.244 15:31:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:46.244 15:31:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK6 00:26:46.244 15:31:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:26:46.244 15:31:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:46.244 15:31:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.244 15:31:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:46.244 15:31:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.244 15:31:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:46.244 15:31:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:47.182 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:47.182 15:31:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:47.182 15:31:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:26:47.182 15:31:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:47.182 15:31:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK7 00:26:47.182 15:31:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:47.182 15:31:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK7 00:26:47.182 15:31:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:26:47.182 15:31:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:47.182 15:31:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:47.182 15:31:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:47.182 15:31:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.182 15:31:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:47.182 15:31:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:48.120 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:48.120 15:31:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:48.120 15:31:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:26:48.120 15:31:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:48.120 15:31:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK8 00:26:48.120 15:31:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK8 00:26:48.120 15:31:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:48.120 15:31:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:26:48.120 15:31:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:48.120 15:31:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.120 15:31:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:48.120 15:31:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.120 15:31:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:48.120 15:31:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:49.057 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:49.057 15:31:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:49.057 15:31:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:26:49.057 15:31:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:49.057 15:31:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK9 00:26:49.057 15:31:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:49.057 15:31:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK9 00:26:49.057 15:31:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:26:49.057 15:31:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:49.057 15:31:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.057 15:31:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:49.057 15:31:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.057 15:31:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:49.057 15:31:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:49.995 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:49.995 15:31:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:49.995 15:31:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:26:49.995 15:31:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:49.995 15:31:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK10 00:26:49.995 15:31:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:49.995 15:31:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK10 00:26:49.995 15:31:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:26:49.995 15:31:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:49.995 15:31:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.995 15:31:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:49.995 15:31:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.995 15:31:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:49.995 15:31:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:50.933 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:50.933 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:50.933 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:26:50.933 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:26:50.933 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK11 00:26:51.192 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:26:51.192 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK11 00:26:51.192 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:26:51.192 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:51.192 15:31:18 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.192 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:51.192 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.192 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:51.192 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:51.192 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:51.192 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:51.192 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:51.192 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:51.192 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:51.192 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:51.192 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:51.192 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:51.192 rmmod nvme_rdma 00:26:51.192 rmmod nvme_fabrics 00:26:51.192 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:51.193 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:51.193 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:51.193 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 3165674 ']' 00:26:51.193 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 3165674 00:26:51.193 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@952 -- # '[' -z 3165674 ']' 00:26:51.193 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # kill -0 3165674 00:26:51.193 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@957 -- # uname 00:26:51.193 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:51.193 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3165674 00:26:51.193 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:51.193 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:51.193 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3165674' 00:26:51.193 killing process with pid 3165674 00:26:51.193 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@971 -- # kill 3165674 00:26:51.193 15:31:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@976 -- # wait 3165674 00:26:55.395 15:31:22 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:55.395 00:26:55.395 real 1m19.576s 00:26:55.395 user 4m53.232s 00:26:55.395 sys 0m19.380s 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:55.395 ************************************ 00:26:55.395 END TEST nvmf_multiconnection 00:26:55.395 ************************************ 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:55.395 ************************************ 00:26:55.395 START TEST nvmf_initiator_timeout 00:26:55.395 ************************************ 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:26:55.395 * Looking for test storage... 00:26:55.395 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:55.395 15:31:22 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:55.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.395 --rc genhtml_branch_coverage=1 00:26:55.395 --rc genhtml_function_coverage=1 00:26:55.395 --rc genhtml_legend=1 00:26:55.395 --rc geninfo_all_blocks=1 00:26:55.395 --rc geninfo_unexecuted_blocks=1 00:26:55.395 00:26:55.395 ' 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:55.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.395 --rc genhtml_branch_coverage=1 00:26:55.395 --rc genhtml_function_coverage=1 00:26:55.395 --rc genhtml_legend=1 00:26:55.395 --rc geninfo_all_blocks=1 00:26:55.395 --rc geninfo_unexecuted_blocks=1 00:26:55.395 00:26:55.395 ' 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:55.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.395 --rc genhtml_branch_coverage=1 00:26:55.395 --rc genhtml_function_coverage=1 00:26:55.395 --rc genhtml_legend=1 00:26:55.395 --rc geninfo_all_blocks=1 00:26:55.395 --rc geninfo_unexecuted_blocks=1 00:26:55.395 00:26:55.395 ' 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:55.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.395 --rc genhtml_branch_coverage=1 00:26:55.395 --rc genhtml_function_coverage=1 00:26:55.395 --rc genhtml_legend=1 00:26:55.395 --rc geninfo_all_blocks=1 00:26:55.395 --rc geninfo_unexecuted_blocks=1 00:26:55.395 00:26:55.395 ' 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.395 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.396 15:31:22 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:55.396 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:55.396 15:31:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:01.970 15:31:29 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:27:01.970 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:27:01.970 
15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:27:01.970 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:27:01.970 Found net devices under 0000:18:00.0: mlx_0_0 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 
-- # (( 1 == 0 )) 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:27:01.970 Found net devices under 0000:18:00.1: mlx_0_1 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:27:01.970 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # rdma_device_init 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # uname 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@66 -- # modprobe ib_cm 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@67 -- # modprobe ib_core 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@68 -- # modprobe ib_umad 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@70 -- # modprobe iw_cm 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@530 -- # allocate_nic_ips 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # get_rdma_if_list 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:01.971 15:31:29 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:27:01.971 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:01.971 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:27:01.971 altname enp24s0f0np0 00:27:01.971 altname ens785f0np0 00:27:01.971 inet 192.168.100.8/24 scope global mlx_0_0 00:27:01.971 valid_lft forever preferred_lft forever 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:01.971 15:31:29 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:27:01.971 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:01.971 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:27:01.971 altname enp24s0f1np1 00:27:01.971 altname ens785f1np1 00:27:01.971 inet 192.168.100.9/24 scope global mlx_0_1 00:27:01.971 valid_lft forever preferred_lft forever 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # get_rdma_if_list 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:01.971 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:02.230 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:02.230 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:02.230 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:02.230 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:02.230 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:02.230 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:27:02.230 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:02.230 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:02.230 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:02.230 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:02.230 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:02.230 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:02.230 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- 
# continue 2 00:27:02.230 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:02.230 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:27:02.230 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:02.230 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:27:02.231 192.168.100.9' 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:27:02.231 192.168.100.9' 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # head -n 1 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:27:02.231 192.168.100.9' 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # tail -n +2 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # head -n 1 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=3178198 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 3178198 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # '[' -z 3178198 ']' 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:02.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:02.231 15:31:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:02.231 [2024-11-06 15:31:29.808888] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:27:02.231 [2024-11-06 15:31:29.808999] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:02.490 [2024-11-06 15:31:29.960422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:02.490 [2024-11-06 15:31:30.079395] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:02.490 [2024-11-06 15:31:30.079455] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:02.490 [2024-11-06 15:31:30.079469] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:02.490 [2024-11-06 15:31:30.079484] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:02.490 [2024-11-06 15:31:30.079495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:02.490 [2024-11-06 15:31:30.081689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:02.490 [2024-11-06 15:31:30.081777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:02.490 [2024-11-06 15:31:30.081839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:02.490 [2024-11-06 15:31:30.081799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.058 15:31:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:03.058 15:31:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@866 -- # return 0 00:27:03.058 15:31:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:03.058 15:31:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:03.058 15:31:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.058 15:31:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:03.058 15:31:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:03.058 15:31:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:03.058 15:31:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.058 15:31:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.317 Malloc0 00:27:03.317 15:31:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.317 15:31:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:03.317 15:31:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.317 15:31:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.317 Delay0 00:27:03.317 15:31:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.317 15:31:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:03.317 15:31:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.317 15:31:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.317 [2024-11-06 15:31:30.804496] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x61200002a040/0x7fb3c959a940) succeed. 00:27:03.317 [2024-11-06 15:31:30.814802] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x61200002a1c0/0x7fb3c9556940) succeed. 
00:27:03.577 15:31:31 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.577 15:31:31 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:03.577 15:31:31 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.577 15:31:31 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.577 15:31:31 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.577 15:31:31 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:03.577 15:31:31 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.577 15:31:31 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.577 15:31:31 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.577 15:31:31 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:03.577 15:31:31 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.577 15:31:31 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.577 [2024-11-06 15:31:31.109789] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:03.577 15:31:31 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.577 15:31:31 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:27:04.517 15:31:32 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:04.517 15:31:32 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # local i=0 00:27:04.517 15:31:32 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:27:04.517 15:31:32 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:27:04.517 15:31:32 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # sleep 2 00:27:06.549 15:31:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:27:06.549 15:31:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:27:06.549 15:31:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:27:06.549 15:31:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:27:06.549 15:31:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:27:06.549 15:31:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # return 0 00:27:06.549 15:31:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3178782 00:27:06.549 15:31:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:06.549 15:31:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:06.549 [global] 00:27:06.549 thread=1 00:27:06.549 invalidate=1 00:27:06.549 rw=write 00:27:06.549 time_based=1 00:27:06.549 runtime=60 00:27:06.549 ioengine=libaio 00:27:06.549 direct=1 00:27:06.549 bs=4096 00:27:06.549 iodepth=1 00:27:06.549 norandommap=0 00:27:06.549 numjobs=1 00:27:06.549 00:27:06.549 verify_dump=1 00:27:06.549 verify_backlog=512 00:27:06.549 verify_state_save=0 00:27:06.549 do_verify=1 00:27:06.549 verify=crc32c-intel 00:27:06.549 [job0] 00:27:06.549 filename=/dev/nvme0n1 00:27:06.549 Could not set queue depth (nvme0n1) 00:27:06.808 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:06.808 fio-3.35 00:27:06.808 Starting 1 thread 00:27:10.102 15:31:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:10.102 15:31:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.102 15:31:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:10.102 true 00:27:10.102 15:31:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.102 15:31:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:10.102 15:31:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.102 15:31:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:10.102 true 00:27:10.102 15:31:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.102 15:31:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:10.102 15:31:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.102 15:31:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:10.102 true 00:27:10.102 15:31:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.102 15:31:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:10.102 15:31:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.102 15:31:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:10.102 true 00:27:10.102 15:31:37 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.102 15:31:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:12.639 15:31:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:12.639 15:31:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.639 15:31:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:12.639 true 00:27:12.639 15:31:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.640 15:31:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:12.640 15:31:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.640 15:31:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:12.640 true 00:27:12.640 15:31:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.640 15:31:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:12.640 15:31:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.640 15:31:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:12.640 true 00:27:12.640 15:31:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.640 15:31:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:12.640 15:31:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.640 15:31:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:12.640 true 00:27:12.640 15:31:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.640 15:31:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:12.640 15:31:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3178782 00:28:08.937 00:28:08.937 job0: (groupid=0, jobs=1): err= 0: pid=3178880: Wed Nov 6 15:32:34 2024 00:28:08.937 read: IOPS=1192, BW=4768KiB/s (4883kB/s)(279MiB/60000msec) 00:28:08.938 slat (usec): min=2, max=949, avg= 9.08, stdev= 3.81 00:28:08.938 clat (usec): min=37, max=375, avg=112.03, stdev= 6.70 00:28:08.938 lat (usec): min=92, max=987, avg=121.10, stdev= 7.86 00:28:08.938 clat percentiles (usec): 00:28:08.938 | 1.00th=[ 98], 5.00th=[ 102], 10.00th=[ 104], 20.00th=[ 108], 00:28:08.938 | 30.00th=[ 110], 40.00th=[ 111], 50.00th=[ 113], 60.00th=[ 114], 00:28:08.938 | 70.00th=[ 116], 80.00th=[ 118], 90.00th=[ 121], 95.00th=[ 123], 00:28:08.938 | 99.00th=[ 128], 99.50th=[ 131], 99.90th=[ 139], 99.95th=[ 159], 00:28:08.938 | 99.99th=[ 237] 00:28:08.938 write: IOPS=1194, BW=4779KiB/s (4893kB/s)(280MiB/60000msec); 0 zone resets 00:28:08.938 slat (usec): min=3, max=5092, 
avg=12.02, stdev=19.23 00:28:08.938 clat (usec): min=78, max=42345k, avg=699.06, stdev=158161.44 00:28:08.938 lat (usec): min=89, max=42345k, avg=711.07, stdev=158161.44 00:28:08.938 clat percentiles (usec): 00:28:08.938 | 1.00th=[ 95], 5.00th=[ 99], 10.00th=[ 101], 20.00th=[ 103], 00:28:08.938 | 30.00th=[ 105], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 110], 00:28:08.938 | 70.00th=[ 112], 80.00th=[ 114], 90.00th=[ 117], 95.00th=[ 120], 00:28:08.938 | 99.00th=[ 125], 99.50th=[ 128], 99.90th=[ 145], 99.95th=[ 178], 00:28:08.938 | 99.99th=[ 277] 00:28:08.938 bw ( KiB/s): min= 3112, max=19600, per=100.00%, avg=15566.67, stdev=2911.48, samples=36 00:28:08.938 iops : min= 778, max= 4900, avg=3891.67, stdev=727.87, samples=36 00:28:08.938 lat (usec) : 50=0.01%, 100=5.52%, 250=94.47%, 500=0.01% 00:28:08.938 lat (msec) : >=2000=0.01% 00:28:08.938 cpu : usr=1.61%, sys=2.44%, ctx=143212, majf=0, minf=107 00:28:08.938 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:08.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.938 issued rwts: total=71524,71680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.938 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:08.938 00:28:08.938 Run status group 0 (all jobs): 00:28:08.938 READ: bw=4768KiB/s (4883kB/s), 4768KiB/s-4768KiB/s (4883kB/s-4883kB/s), io=279MiB (293MB), run=60000-60000msec 00:28:08.938 WRITE: bw=4779KiB/s (4893kB/s), 4779KiB/s-4779KiB/s (4893kB/s-4893kB/s), io=280MiB (294MB), run=60000-60000msec 00:28:08.938 00:28:08.938 Disk stats (read/write): 00:28:08.938 nvme0n1: ios=71514/71221, merge=0/0, ticks=7861/7225, in_queue=15086, util=99.79% 00:28:08.938 15:32:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:08.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1221 -- # local i=0 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1233 -- # return 0 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:08.938 nvmf hotplug test: fio successful as expected 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:28:08.938 rmmod nvme_rdma 00:28:08.938 rmmod nvme_fabrics 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 3178198 ']' 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 3178198 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # '[' -z 3178198 ']' 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # kill -0 3178198 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@957 -- # uname 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3178198 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3178198' 00:28:08.938 killing process with pid 3178198 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@971 -- # kill 3178198 00:28:08.938 15:32:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@976 -- # wait 3178198 00:28:10.317 15:32:37 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:10.317 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:28:10.317 00:28:10.317 real 1m15.014s 00:28:10.317 user 4m31.884s 00:28:10.317 sys 0m8.334s 00:28:10.317 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:10.317 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:10.317 ************************************ 00:28:10.317 END TEST nvmf_initiator_timeout 00:28:10.317 ************************************ 00:28:10.317 15:32:37 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:28:10.317 15:32:37 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:28:10.317 15:32:37 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]] 00:28:10.317 15:32:37 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:28:10.317 15:32:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:10.317 15:32:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:10.317 15:32:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:10.317 ************************************ 00:28:10.317 START TEST nvmf_srq_overwhelm 00:28:10.317 ************************************ 00:28:10.317 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:28:10.317 * Looking for test storage... 
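The initiator_timeout case above finishes cleanly ("nvmf hotplug test: fio successful as expected", torn down after about 1m15s of wall time), and the harness immediately launches the next RDMA-only case via run_test. Reproduced by hand, that launch would look roughly like the sketch below; it assumes the same workspace checkout and mlx5 setup as this node and is not an extra step the job performs:

    # manual equivalent of the run_test invocation recorded above (sketch)
    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    test/nvmf/target/srq_overwhelm.sh --transport=rdma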
00:28:10.317 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:28:10.317 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:10.317 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1691 -- # lcov --version 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:10.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.318 --rc genhtml_branch_coverage=1 00:28:10.318 --rc genhtml_function_coverage=1 00:28:10.318 --rc genhtml_legend=1 00:28:10.318 --rc geninfo_all_blocks=1 00:28:10.318 --rc geninfo_unexecuted_blocks=1 00:28:10.318 00:28:10.318 ' 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:10.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.318 --rc genhtml_branch_coverage=1 00:28:10.318 --rc genhtml_function_coverage=1 00:28:10.318 --rc genhtml_legend=1 00:28:10.318 --rc geninfo_all_blocks=1 00:28:10.318 --rc geninfo_unexecuted_blocks=1 00:28:10.318 00:28:10.318 ' 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:10.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.318 --rc genhtml_branch_coverage=1 00:28:10.318 --rc genhtml_function_coverage=1 00:28:10.318 --rc genhtml_legend=1 00:28:10.318 --rc geninfo_all_blocks=1 00:28:10.318 --rc geninfo_unexecuted_blocks=1 00:28:10.318 00:28:10.318 ' 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:10.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.318 --rc genhtml_branch_coverage=1 00:28:10.318 --rc genhtml_function_coverage=1 00:28:10.318 --rc genhtml_legend=1 00:28:10.318 --rc geninfo_all_blocks=1 00:28:10.318 --rc geninfo_unexecuted_blocks=1 00:28:10.318 00:28:10.318 ' 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:28:10.318 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:10.319 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:28:10.319 15:32:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:16.944 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:16.944 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:28:16.944 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:16.944 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:16.944 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:16.944 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:16.944 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:16.944 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:28:16.944 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:16.944 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # e810=() 00:28:16.944 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:28:16.944 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:28:16.944 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:28:16.944 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:28:16.944 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- 
# local -ga mlx 00:28:16.944 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:16.944 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:16.944 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:16.944 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:16.944 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:28:16.945 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme 
connect -i 15' 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:28:16.945 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:28:16.945 Found net devices under 0000:18:00.0: mlx_0_0 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:28:16.945 Found net devices under 0000:18:00.1: mlx_0_1 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # is_hw=yes 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
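The discovery pass above is what the mlx5/phy selection (the [[ mlx5 == mlx5 ]] and [[ phy != virt ]] checks) boils down to: common.sh scans the PCI bus, keeps only the Mellanox devices (both ports report 0x15b3 - 0x1015 at 0000:18:00.0 and 0000:18:00.1), maps each to its netdev (mlx_0_0, mlx_0_1), and sets is_hw=yes. A rough manual spot-check of the same facts, assuming stock lspci and the sysfs paths printed above:

    # sketch: confirm the two mlx5 ports and their netdev names by hand
    lspci -Dnn | grep -i 15b3                      # expect 0000:18:00.0 and 0000:18:00.1
    ls /sys/bus/pci/devices/0000:18:00.0/net/      # expect mlx_0_0
    ls /sys/bus/pci/devices/0000:18:00.1/net/      # expect mlx_0_1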
00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # rdma_device_init 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:28:16.945 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:28:17.205 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:28:17.205 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:28:17.205 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:28:17.205 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:28:17.205 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@530 -- # allocate_nic_ips 00:28:17.205 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:17.205 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:28:17.205 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:17.205 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:17.205 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:17.205 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:28:17.206 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:17.206 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:28:17.206 altname enp24s0f0np0 00:28:17.206 altname ens785f0np0 00:28:17.206 inet 192.168.100.8/24 scope global mlx_0_0 00:28:17.206 valid_lft forever preferred_lft forever 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:28:17.206 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:17.206 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:28:17.206 altname enp24s0f1np1 00:28:17.206 altname ens785f1np1 00:28:17.206 inet 192.168.100.9/24 scope global mlx_0_1 00:28:17.206 valid_lft forever preferred_lft forever 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # return 0 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:28:17.206 192.168.100.9' 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:28:17.206 192.168.100.9' 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # head -n 1 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:28:17.206 192.168.100.9' 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # tail -n +2 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # head -n 1 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@509 -- # nvmfpid=3189804 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@510 -- # waitforlisten 3189804 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@833 -- # '[' -z 3189804 ']' 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
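With the IB/RDMA modules loaded and the two ports carrying 192.168.100.8 (mlx_0_0) and 192.168.100.9 (mlx_0_1), nvmfappstart brings the target up on all four cores and blocks until its RPC socket answers. Stripped of the harness wrappers, the pattern is roughly the sketch below; the rpc_get_methods probe is an illustrative way to poll readiness, not necessarily what waitforlisten runs verbatim:

    # sketch of the nvmfappstart / waitforlisten pattern recorded above
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5          # keep polling until the target is listening
    done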
00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:17.206 15:32:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:17.466 [2024-11-06 15:32:44.913623] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:28:17.466 [2024-11-06 15:32:44.913758] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:17.466 [2024-11-06 15:32:45.064097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:17.725 [2024-11-06 15:32:45.172274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:17.725 [2024-11-06 15:32:45.172324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:17.725 [2024-11-06 15:32:45.172340] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:17.725 [2024-11-06 15:32:45.172353] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:17.725 [2024-11-06 15:32:45.172362] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:17.725 [2024-11-06 15:32:45.174636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.725 [2024-11-06 15:32:45.174688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:17.725 [2024-11-06 15:32:45.174771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.725 [2024-11-06 15:32:45.174797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:18.293 15:32:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:18.293 15:32:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@866 -- # return 0 00:28:18.293 15:32:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:18.293 15:32:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:18.293 15:32:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:18.293 15:32:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:18.293 15:32:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:28:18.293 15:32:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.293 15:32:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:18.293 [2024-11-06 15:32:45.803284] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f44dd525940) succeed. 00:28:18.293 [2024-11-06 15:32:45.812852] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f44dcbbd940) succeed. 
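Once the reactors are running on cores 0-3, the script creates the RDMA transport and both mlx5 IB devices are registered. The rpc_cmd line in the xtrace corresponds to the plain RPC below; -s 1024 sets the RDMA shared-receive-queue depth (the resource this test sets out to overwhelm), while --num-shared-buffers 1024 and -u 8192 size the shared data-buffer pool and I/O unit:

    # the transport-creation RPC recorded above, issued directly against the target
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024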
00:28:18.293 15:32:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.293 15:32:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:28:18.293 15:32:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:28:18.293 15:32:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:28:18.293 15:32:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.293 15:32:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:18.552 15:32:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.552 15:32:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:18.552 15:32:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.552 15:32:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:18.552 Malloc0 00:28:18.552 15:32:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.552 15:32:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:28:18.552 15:32:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.552 15:32:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:18.552 15:32:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.552 15:32:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:28:18.552 15:32:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.552 15:32:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:18.552 [2024-11-06 15:32:46.020850] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:18.552 15:32:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.552 15:32:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:28:19.490 15:32:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:28:19.490 15:32:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1237 -- # local i=0 00:28:19.490 15:32:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:28:19.490 15:32:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:28:19.490 15:32:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- 
# lsblk -l -o NAME 00:28:19.490 15:32:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:28:19.490 15:32:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1248 -- # return 0 00:28:19.490 15:32:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:28:19.490 15:32:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:19.490 15:32:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.490 15:32:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:19.490 15:32:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.490 15:32:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:19.490 15:32:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.490 15:32:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:19.490 Malloc1 00:28:19.490 15:32:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.490 15:32:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:19.490 15:32:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.490 15:32:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:19.490 15:32:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.490 15:32:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:19.490 15:32:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.490 15:32:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:19.490 15:32:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.490 15:32:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:28:20.870 15:32:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:28:20.870 15:32:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1237 -- # local i=0 00:28:20.870 15:32:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:28:20.870 15:32:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme1n1 00:28:20.870 15:32:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:28:20.870 15:32:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1244 -- # grep -q -w nvme1n1 00:28:20.870 15:32:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1248 -- # return 0 00:28:20.870 15:32:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:28:20.870 15:32:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:20.870 15:32:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.870 15:32:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:20.870 15:32:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.870 15:32:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:28:20.870 15:32:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.870 15:32:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:20.870 Malloc2 00:28:20.870 15:32:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.870 15:32:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:28:20.870 15:32:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.870 15:32:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:20.870 15:32:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.870 15:32:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:28:20.870 15:32:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.870 15:32:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:20.870 15:32:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.870 15:32:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:28:21.807 15:32:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:28:21.807 15:32:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1237 -- # local i=0 00:28:21.807 15:32:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:28:21.807 15:32:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme2n1 00:28:21.807 15:32:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:28:21.807 15:32:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # grep -q -w nvme2n1 00:28:21.807 15:32:49 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1248 -- # return 0 00:28:21.807 15:32:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:28:21.807 15:32:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:28:21.807 15:32:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.807 15:32:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:21.807 15:32:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.807 15:32:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:28:21.807 15:32:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.807 15:32:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:21.807 Malloc3 00:28:21.807 15:32:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.807 15:32:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:28:21.807 15:32:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.807 15:32:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:21.807 15:32:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.807 15:32:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:28:21.807 15:32:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.807 15:32:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:21.807 15:32:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.807 15:32:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:28:22.744 15:32:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:28:22.744 15:32:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1237 -- # local i=0 00:28:22.744 15:32:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:28:22.744 15:32:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme3n1 00:28:22.744 15:32:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:28:22.744 15:32:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # grep -q -w nvme3n1 00:28:22.744 15:32:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1248 -- # return 0 00:28:22.744 
15:32:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:28:22.744 15:32:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:28:22.744 15:32:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.744 15:32:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:22.744 15:32:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.744 15:32:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:28:22.744 15:32:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.744 15:32:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:23.002 Malloc4 00:28:23.003 15:32:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.003 15:32:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:28:23.003 15:32:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.003 15:32:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:23.003 15:32:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.003 15:32:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:28:23.003 15:32:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.003 15:32:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:23.003 15:32:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.003 15:32:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:28:23.940 15:32:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:28:23.940 15:32:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1237 -- # local i=0 00:28:23.940 15:32:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:28:23.940 15:32:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme4n1 00:28:23.940 15:32:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:28:23.940 15:32:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # grep -q -w nvme4n1 00:28:23.940 15:32:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1248 -- # return 0 00:28:23.940 15:32:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
00:28:23.940 15:32:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:28:23.940 15:32:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.940 15:32:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:23.940 15:32:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.940 15:32:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:28:23.940 15:32:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.940 15:32:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:23.940 Malloc5 00:28:23.940 15:32:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.940 15:32:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:28:23.940 15:32:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.940 15:32:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:23.940 15:32:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.940 15:32:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:28:23.940 15:32:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.940 15:32:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:23.940 15:32:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.941 15:32:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:28:25.318 15:32:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:28:25.318 15:32:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1237 -- # local i=0 00:28:25.318 15:32:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:28:25.318 15:32:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme5n1 00:28:25.318 15:32:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:28:25.318 15:32:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # grep -q -w nvme5n1 00:28:25.318 15:32:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1248 -- # return 0 00:28:25.318 15:32:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:28:25.318 
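The xtrace output above repeats the same setup sequence once per target (cnode2 through cnode5 in this excerpt): create a subsystem, back it with a malloc bdev, expose it over an RDMA listener, connect from the host with nvme-cli, and wait for the block device to appear. Condensed into plain shell it looks roughly like the sketch below; rpc_cmd and waitforblk are helpers from SPDK's test framework (common/autotest_common.sh), and the index arithmetic, $HOST_NQN, and $HOST_ID are placeholders rather than the script's literal variables.

# Sketch only: reconstructed from the trace above, not the literal srq_overwhelm.sh source.
for i in $(seq 0 5); do
  n=$((i + 1))                                     # cnode/device numbering as seen in the log (illustrative)
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$n -a -s SPDK0000000000000$n
  rpc_cmd bdev_malloc_create 64 512 -b Malloc$n    # 64 MiB malloc bdev with 512-byte blocks
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$n Malloc$n
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$n -t rdma -a 192.168.100.8 -s 4420
  nvme connect -i 15 --hostnqn="$HOST_NQN" --hostid="$HOST_ID" \
    -t rdma -n nqn.2016-06.io.spdk:cnode$n -a 192.168.100.8 -s 4420
  waitforblk nvme${n}n1                            # polls lsblk until the new namespace shows up
done

Once all six namespaces are visible, the script launches the fio-wrapper invocation traced just above, whose generated job file follows.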
[global] 00:28:25.318 thread=1 00:28:25.318 invalidate=1 00:28:25.318 rw=read 00:28:25.318 time_based=1 00:28:25.318 runtime=10 00:28:25.318 ioengine=libaio 00:28:25.318 direct=1 00:28:25.318 bs=1048576 00:28:25.318 iodepth=128 00:28:25.318 norandommap=1 00:28:25.318 numjobs=13 00:28:25.318 00:28:25.318 [job0] 00:28:25.318 filename=/dev/nvme0n1 00:28:25.318 [job1] 00:28:25.318 filename=/dev/nvme1n1 00:28:25.318 [job2] 00:28:25.318 filename=/dev/nvme2n1 00:28:25.318 [job3] 00:28:25.318 filename=/dev/nvme3n1 00:28:25.318 [job4] 00:28:25.318 filename=/dev/nvme4n1 00:28:25.318 [job5] 00:28:25.318 filename=/dev/nvme5n1 00:28:25.318 Could not set queue depth (nvme0n1) 00:28:25.318 Could not set queue depth (nvme1n1) 00:28:25.318 Could not set queue depth (nvme2n1) 00:28:25.318 Could not set queue depth (nvme3n1) 00:28:25.318 Could not set queue depth (nvme4n1) 00:28:25.318 Could not set queue depth (nvme5n1) 00:28:25.578 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:28:25.578 ... 00:28:25.578 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:28:25.578 ... 00:28:25.578 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:28:25.578 ... 00:28:25.578 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:28:25.578 ... 00:28:25.578 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:28:25.578 ... 00:28:25.578 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:28:25.578 ... 
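The job file dumped above is what the fio-wrapper call appears to generate from its flags: -i 1048576 becomes bs, -d 128 becomes iodepth, -t read becomes rw, -r 10 becomes runtime, and -n 13 becomes numjobs, with one [jobN] section per connected namespace. Six job sections times numjobs=13 accounts for the 78 threads fio reports starting below. A roughly equivalent standalone run (an assumed reconstruction, not the wrapper's actual code) would be:

# Assumed reconstruction of what the wrapper hands to fio; the temporary
# file name is illustrative.
cat > /tmp/nvmf.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=1048576
iodepth=128
norandommap=1
numjobs=13
EOF
for i in $(seq 0 5); do
  printf '[job%d]\nfilename=/dev/nvme%dn1\n' "$i" "$i" >> /tmp/nvmf.fio
done
fio /tmp/nvmf.fio   # 6 job sections x numjobs=13 = 78 threads

The "Could not set queue depth" warnings are fio noise on these fabric-attached devices; the run itself proceeds, and the per-job statistics follow.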
00:28:25.578 fio-3.35 00:28:25.578 Starting 78 threads 00:28:40.469 00:28:40.469 job0: (groupid=0, jobs=1): err= 0: pid=3191000: Wed Nov 6 15:33:07 2024 00:28:40.469 read: IOPS=1, BW=1882KiB/s (1927kB/s)(26.0MiB/14150msec) 00:28:40.469 slat (usec): min=996, max=2157.3k, avg=464731.08, stdev=856267.52 00:28:40.469 clat (msec): min=2066, max=14148, avg=11873.33, stdev=3763.45 00:28:40.469 lat (msec): min=4156, max=14149, avg=12338.06, stdev=3209.26 00:28:40.469 clat percentiles (msec): 00:28:40.469 | 1.00th=[ 2072], 5.00th=[ 4144], 10.00th=[ 4178], 20.00th=[ 8490], 00:28:40.469 | 30.00th=[12818], 40.00th=[14026], 50.00th=[14160], 60.00th=[14160], 00:28:40.469 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:28:40.469 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:28:40.469 | 99.99th=[14160] 00:28:40.469 lat (msec) : >=2000=100.00% 00:28:40.469 cpu : usr=0.00%, sys=0.20%, ctx=39, majf=0, minf=6657 00:28:40.469 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0% 00:28:40.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.469 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:28:40.469 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.469 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.469 job0: (groupid=0, jobs=1): err= 0: pid=3191001: Wed Nov 6 15:33:07 2024 00:28:40.469 read: IOPS=11, BW=11.3MiB/s (11.8MB/s)(160MiB/14172msec) 00:28:40.469 slat (usec): min=472, max=2133.1k, avg=62532.79, stdev=336548.21 00:28:40.469 clat (msec): min=283, max=13839, avg=10942.26, stdev=4897.15 00:28:40.469 lat (msec): min=286, max=13842, avg=11004.80, stdev=4868.27 00:28:40.469 clat percentiles (msec): 00:28:40.469 | 1.00th=[ 288], 5.00th=[ 393], 10.00th=[ 542], 20.00th=[ 6275], 00:28:40.469 | 30.00th=[13624], 40.00th=[13624], 50.00th=[13624], 60.00th=[13624], 00:28:40.469 | 70.00th=[13758], 80.00th=[13758], 90.00th=[13758], 95.00th=[13758], 00:28:40.469 | 99.00th=[13892], 99.50th=[13892], 99.90th=[13892], 99.95th=[13892], 00:28:40.469 | 99.99th=[13892] 00:28:40.469 bw ( KiB/s): min= 4087, max=38912, per=0.55%, avg=11262.50, stdev=13632.08, samples=6 00:28:40.469 iops : min= 3, max= 38, avg=10.83, stdev=13.42, samples=6 00:28:40.469 lat (msec) : 500=8.75%, 750=3.12%, 2000=2.50%, >=2000=85.62% 00:28:40.469 cpu : usr=0.00%, sys=0.55%, ctx=231, majf=0, minf=32770 00:28:40.469 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=5.0%, 16=10.0%, 32=20.0%, >=64=60.6% 00:28:40.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.469 complete : 0=0.0%, 4=97.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.9% 00:28:40.469 issued rwts: total=160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.469 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.469 job0: (groupid=0, jobs=1): err= 0: pid=3191002: Wed Nov 6 15:33:07 2024 00:28:40.469 read: IOPS=3, BW=4036KiB/s (4133kB/s)(48.0MiB/12179msec) 00:28:40.469 slat (usec): min=1089, max=2134.3k, avg=209050.46, stdev=607763.01 00:28:40.469 clat (msec): min=2143, max=12176, avg=10297.66, stdev=2992.19 00:28:40.469 lat (msec): min=4258, max=12178, avg=10506.71, stdev=2751.20 00:28:40.469 clat percentiles (msec): 00:28:40.469 | 1.00th=[ 2140], 5.00th=[ 4279], 10.00th=[ 4279], 20.00th=[ 6409], 00:28:40.469 | 30.00th=[10671], 40.00th=[12147], 50.00th=[12147], 60.00th=[12147], 00:28:40.469 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:28:40.469 | 
99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:40.469 | 99.99th=[12147] 00:28:40.469 lat (msec) : >=2000=100.00% 00:28:40.469 cpu : usr=0.01%, sys=0.46%, ctx=61, majf=0, minf=12289 00:28:40.469 IO depths : 1=2.1%, 2=4.2%, 4=8.3%, 8=16.7%, 16=33.3%, 32=35.4%, >=64=0.0% 00:28:40.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.469 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:40.469 issued rwts: total=48,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.469 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.469 job0: (groupid=0, jobs=1): err= 0: pid=3191003: Wed Nov 6 15:33:07 2024 00:28:40.469 read: IOPS=45, BW=45.7MiB/s (48.0MB/s)(650MiB/14208msec) 00:28:40.469 slat (usec): min=43, max=2199.2k, avg=15406.93, stdev=164016.21 00:28:40.469 clat (msec): min=161, max=13066, avg=2693.73, stdev=4836.88 00:28:40.469 lat (msec): min=162, max=13069, avg=2709.14, stdev=4852.80 00:28:40.469 clat percentiles (msec): 00:28:40.469 | 1.00th=[ 163], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 165], 00:28:40.469 | 30.00th=[ 167], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 171], 00:28:40.469 | 70.00th=[ 284], 80.00th=[ 6342], 90.00th=[12953], 95.00th=[13087], 00:28:40.469 | 99.00th=[13087], 99.50th=[13087], 99.90th=[13087], 99.95th=[13087], 00:28:40.469 | 99.99th=[13087] 00:28:40.469 bw ( KiB/s): min= 1479, max=591872, per=7.46%, avg=152931.14, stdev=250193.79, samples=7 00:28:40.469 iops : min= 1, max= 578, avg=149.14, stdev=244.47, samples=7 00:28:40.469 lat (msec) : 250=66.46%, 500=9.08%, 750=1.54%, >=2000=22.92% 00:28:40.469 cpu : usr=0.02%, sys=0.78%, ctx=707, majf=0, minf=32769 00:28:40.469 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.3% 00:28:40.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.469 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:40.469 issued rwts: total=650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.469 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.469 job0: (groupid=0, jobs=1): err= 0: pid=3191004: Wed Nov 6 15:33:07 2024 00:28:40.469 read: IOPS=1, BW=1231KiB/s (1260kB/s)(17.0MiB/14142msec) 00:28:40.469 slat (msec): min=8, max=4240, avg=709.45, stdev=1231.58 00:28:40.469 clat (msec): min=2080, max=14117, avg=9850.84, stdev=4177.49 00:28:40.469 lat (msec): min=4200, max=14141, avg=10560.29, stdev=3780.66 00:28:40.469 clat percentiles (msec): 00:28:40.469 | 1.00th=[ 2089], 5.00th=[ 2089], 10.00th=[ 4212], 20.00th=[ 4245], 00:28:40.469 | 30.00th=[ 8557], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12818], 00:28:40.469 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14160], 95.00th=[14160], 00:28:40.469 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:28:40.469 | 99.99th=[14160] 00:28:40.469 lat (msec) : >=2000=100.00% 00:28:40.469 cpu : usr=0.00%, sys=0.13%, ctx=41, majf=0, minf=4353 00:28:40.469 IO depths : 1=5.9%, 2=11.8%, 4=23.5%, 8=47.1%, 16=11.8%, 32=0.0%, >=64=0.0% 00:28:40.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.469 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:28:40.469 issued rwts: total=17,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.469 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.469 job0: (groupid=0, jobs=1): err= 0: pid=3191005: Wed Nov 6 15:33:07 2024 00:28:40.469 read: IOPS=30, BW=30.2MiB/s (31.7MB/s)(363MiB/12018msec) 00:28:40.470 
slat (usec): min=56, max=2119.9k, avg=32924.79, stdev=222821.71 00:28:40.470 clat (msec): min=64, max=8037, avg=2979.30, stdev=2703.73 00:28:40.470 lat (msec): min=692, max=8530, avg=3012.22, stdev=2717.52 00:28:40.470 clat percentiles (msec): 00:28:40.470 | 1.00th=[ 693], 5.00th=[ 693], 10.00th=[ 693], 20.00th=[ 693], 00:28:40.470 | 30.00th=[ 709], 40.00th=[ 785], 50.00th=[ 2735], 60.00th=[ 2970], 00:28:40.470 | 70.00th=[ 3171], 80.00th=[ 6409], 90.00th=[ 7953], 95.00th=[ 7953], 00:28:40.470 | 99.00th=[ 8020], 99.50th=[ 8020], 99.90th=[ 8020], 99.95th=[ 8020], 00:28:40.470 | 99.99th=[ 8020] 00:28:40.470 bw ( KiB/s): min= 9696, max=178176, per=5.86%, avg=120037.75, stdev=75874.83, samples=4 00:28:40.470 iops : min= 9, max= 174, avg=116.75, stdev=74.16, samples=4 00:28:40.470 lat (msec) : 100=0.28%, 750=38.02%, 1000=5.23%, 2000=1.38%, >=2000=55.10% 00:28:40.470 cpu : usr=0.00%, sys=1.18%, ctx=299, majf=0, minf=32769 00:28:40.470 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.8%, >=64=82.6% 00:28:40.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.470 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:28:40.470 issued rwts: total=363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.470 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.470 job0: (groupid=0, jobs=1): err= 0: pid=3191006: Wed Nov 6 15:33:07 2024 00:28:40.470 read: IOPS=1, BW=1598KiB/s (1637kB/s)(22.0MiB/14095msec) 00:28:40.470 slat (usec): min=1040, max=9671.4k, avg=546069.01, stdev=2086606.41 00:28:40.470 clat (msec): min=2080, max=14092, avg=12167.50, stdev=4115.79 00:28:40.470 lat (msec): min=4185, max=14094, avg=12713.57, stdev=3458.17 00:28:40.470 clat percentiles (msec): 00:28:40.470 | 1.00th=[ 2089], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[13892], 00:28:40.470 | 30.00th=[13892], 40.00th=[14026], 50.00th=[14026], 60.00th=[14026], 00:28:40.470 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:28:40.470 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:28:40.470 | 99.99th=[14160] 00:28:40.470 lat (msec) : >=2000=100.00% 00:28:40.470 cpu : usr=0.00%, sys=0.18%, ctx=14, majf=0, minf=5633 00:28:40.470 IO depths : 1=4.5%, 2=9.1%, 4=18.2%, 8=36.4%, 16=31.8%, 32=0.0%, >=64=0.0% 00:28:40.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.470 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:28:40.470 issued rwts: total=22,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.470 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.470 job0: (groupid=0, jobs=1): err= 0: pid=3191007: Wed Nov 6 15:33:07 2024 00:28:40.470 read: IOPS=2, BW=2529KiB/s (2589kB/s)(30.0MiB/12149msec) 00:28:40.470 slat (msec): min=2, max=2137, avg=333.63, stdev=738.91 00:28:40.470 clat (msec): min=2139, max=12129, avg=9398.76, stdev=3512.39 00:28:40.470 lat (msec): min=2171, max=12148, avg=9732.39, stdev=3265.82 00:28:40.470 clat percentiles (msec): 00:28:40.470 | 1.00th=[ 2140], 5.00th=[ 2165], 10.00th=[ 4279], 20.00th=[ 4329], 00:28:40.470 | 30.00th=[ 6477], 40.00th=[ 8658], 50.00th=[11879], 60.00th=[12013], 00:28:40.470 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 00:28:40.470 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:40.470 | 99.99th=[12147] 00:28:40.470 lat (msec) : >=2000=100.00% 00:28:40.470 cpu : usr=0.00%, sys=0.28%, ctx=63, majf=0, minf=7681 00:28:40.470 IO depths : 1=3.3%, 2=6.7%, 4=13.3%, 
8=26.7%, 16=50.0%, 32=0.0%, >=64=0.0% 00:28:40.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.470 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:28:40.470 issued rwts: total=30,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.470 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.470 job0: (groupid=0, jobs=1): err= 0: pid=3191008: Wed Nov 6 15:33:07 2024 00:28:40.470 read: IOPS=24, BW=24.3MiB/s (25.5MB/s)(295MiB/12133msec) 00:28:40.470 slat (usec): min=58, max=2041.1k, avg=33918.97, stdev=222429.63 00:28:40.470 clat (msec): min=2125, max=11773, avg=4689.32, stdev=1699.56 00:28:40.470 lat (msec): min=2461, max=11778, avg=4723.24, stdev=1719.86 00:28:40.470 clat percentiles (msec): 00:28:40.470 | 1.00th=[ 2467], 5.00th=[ 2500], 10.00th=[ 3540], 20.00th=[ 3675], 00:28:40.470 | 30.00th=[ 3977], 40.00th=[ 4111], 50.00th=[ 4178], 60.00th=[ 4279], 00:28:40.470 | 70.00th=[ 4329], 80.00th=[ 6141], 90.00th=[ 8154], 95.00th=[ 8221], 00:28:40.470 | 99.00th=[10402], 99.50th=[10671], 99.90th=[11745], 99.95th=[11745], 00:28:40.470 | 99.99th=[11745] 00:28:40.470 bw ( KiB/s): min= 2048, max=149504, per=2.80%, avg=57332.33, stdev=54821.81, samples=6 00:28:40.470 iops : min= 2, max= 146, avg=55.83, stdev=53.61, samples=6 00:28:40.470 lat (msec) : >=2000=100.00% 00:28:40.470 cpu : usr=0.00%, sys=0.97%, ctx=333, majf=0, minf=32769 00:28:40.470 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.4%, 32=10.8%, >=64=78.6% 00:28:40.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.470 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:28:40.470 issued rwts: total=295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.470 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.470 job0: (groupid=0, jobs=1): err= 0: pid=3191009: Wed Nov 6 15:33:07 2024 00:28:40.470 read: IOPS=5, BW=6076KiB/s (6221kB/s)(72.0MiB/12135msec) 00:28:40.470 slat (usec): min=964, max=2116.1k, avg=139046.26, stdev=492056.03 00:28:40.470 clat (msec): min=2122, max=12133, avg=9150.50, stdev=3551.95 00:28:40.470 lat (msec): min=2135, max=12134, avg=9289.55, stdev=3467.93 00:28:40.470 clat percentiles (msec): 00:28:40.470 | 1.00th=[ 2123], 5.00th=[ 2140], 10.00th=[ 4245], 20.00th=[ 4329], 00:28:40.470 | 30.00th=[ 6477], 40.00th=[ 8557], 50.00th=[11879], 60.00th=[12013], 00:28:40.470 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:28:40.470 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:40.470 | 99.99th=[12147] 00:28:40.470 lat (msec) : >=2000=100.00% 00:28:40.470 cpu : usr=0.00%, sys=0.63%, ctx=80, majf=0, minf=18433 00:28:40.470 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.1%, 16=22.2%, 32=44.4%, >=64=12.5% 00:28:40.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.470 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:40.470 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.470 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.470 job0: (groupid=0, jobs=1): err= 0: pid=3191010: Wed Nov 6 15:33:07 2024 00:28:40.470 read: IOPS=3, BW=3125KiB/s (3200kB/s)(37.0MiB/12123msec) 00:28:40.470 slat (usec): min=1018, max=2118.0k, avg=270561.92, stdev=677296.99 00:28:40.470 clat (msec): min=2111, max=12118, avg=10227.76, stdev=3046.50 00:28:40.470 lat (msec): min=2136, max=12122, avg=10498.32, stdev=2734.20 00:28:40.470 clat percentiles (msec): 00:28:40.470 | 
1.00th=[ 2106], 5.00th=[ 2140], 10.00th=[ 4279], 20.00th=[ 8557], 00:28:40.470 | 30.00th=[10671], 40.00th=[12013], 50.00th=[12013], 60.00th=[12013], 00:28:40.470 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 00:28:40.470 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:40.470 | 99.99th=[12147] 00:28:40.470 lat (msec) : >=2000=100.00% 00:28:40.470 cpu : usr=0.00%, sys=0.35%, ctx=59, majf=0, minf=9473 00:28:40.470 IO depths : 1=2.7%, 2=5.4%, 4=10.8%, 8=21.6%, 16=43.2%, 32=16.2%, >=64=0.0% 00:28:40.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.470 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:40.470 issued rwts: total=37,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.470 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.470 job0: (groupid=0, jobs=1): err= 0: pid=3191011: Wed Nov 6 15:33:07 2024 00:28:40.470 read: IOPS=11, BW=11.2MiB/s (11.8MB/s)(136MiB/12114msec) 00:28:40.470 slat (usec): min=576, max=2137.1k, avg=73682.47, stdev=360362.05 00:28:40.470 clat (msec): min=1493, max=12091, avg=10903.30, stdev=2193.61 00:28:40.470 lat (msec): min=1497, max=12094, avg=10976.98, stdev=2059.24 00:28:40.470 clat percentiles (msec): 00:28:40.470 | 1.00th=[ 2089], 5.00th=[ 5671], 10.00th=[ 8557], 20.00th=[11342], 00:28:40.470 | 30.00th=[11476], 40.00th=[11476], 50.00th=[11610], 60.00th=[11610], 00:28:40.470 | 70.00th=[11745], 80.00th=[11879], 90.00th=[12013], 95.00th=[12013], 00:28:40.470 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:40.470 | 99.99th=[12147] 00:28:40.470 bw ( KiB/s): min= 1458, max= 6144, per=0.15%, avg=2973.67, stdev=1798.80, samples=6 00:28:40.470 iops : min= 1, max= 6, avg= 2.83, stdev= 1.83, samples=6 00:28:40.470 lat (msec) : 2000=0.74%, >=2000=99.26% 00:28:40.470 cpu : usr=0.00%, sys=1.06%, ctx=121, majf=0, minf=32769 00:28:40.470 IO depths : 1=0.7%, 2=1.5%, 4=2.9%, 8=5.9%, 16=11.8%, 32=23.5%, >=64=53.7% 00:28:40.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.470 complete : 0=0.0%, 4=90.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=10.0% 00:28:40.470 issued rwts: total=136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.470 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.470 job0: (groupid=0, jobs=1): err= 0: pid=3191012: Wed Nov 6 15:33:07 2024 00:28:40.470 read: IOPS=64, BW=64.3MiB/s (67.4MB/s)(781MiB/12147msec) 00:28:40.470 slat (usec): min=43, max=2089.3k, avg=12824.30, stdev=118150.63 00:28:40.470 clat (msec): min=282, max=11933, avg=1790.13, stdev=2497.80 00:28:40.470 lat (msec): min=284, max=11955, avg=1802.96, stdev=2510.92 00:28:40.470 clat percentiles (msec): 00:28:40.470 | 1.00th=[ 284], 5.00th=[ 309], 10.00th=[ 347], 20.00th=[ 405], 00:28:40.470 | 30.00th=[ 430], 40.00th=[ 451], 50.00th=[ 518], 60.00th=[ 659], 00:28:40.470 | 70.00th=[ 1234], 80.00th=[ 1938], 90.00th=[ 7349], 95.00th=[ 7483], 00:28:40.470 | 99.00th=[ 7617], 99.50th=[10671], 99.90th=[11879], 99.95th=[11879], 00:28:40.470 | 99.99th=[11879] 00:28:40.470 bw ( KiB/s): min= 2048, max=374784, per=7.26%, avg=148783.56, stdev=129374.92, samples=9 00:28:40.470 iops : min= 2, max= 366, avg=145.11, stdev=126.43, samples=9 00:28:40.470 lat (msec) : 500=47.25%, 750=12.93%, 2000=22.15%, >=2000=17.67% 00:28:40.470 cpu : usr=0.01%, sys=1.30%, ctx=784, majf=0, minf=32769 00:28:40.470 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.1%, >=64=91.9% 00:28:40.470 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.470 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:40.470 issued rwts: total=781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.470 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.470 job1: (groupid=0, jobs=1): err= 0: pid=3191018: Wed Nov 6 15:33:07 2024 00:28:40.470 read: IOPS=4, BW=4823KiB/s (4939kB/s)(67.0MiB/14225msec) 00:28:40.470 slat (usec): min=827, max=2072.5k, avg=149363.83, stdev=500540.00 00:28:40.471 clat (msec): min=4216, max=14219, avg=11700.52, stdev=3383.09 00:28:40.471 lat (msec): min=4233, max=14223, avg=11849.89, stdev=3266.57 00:28:40.471 clat percentiles (msec): 00:28:40.471 | 1.00th=[ 4212], 5.00th=[ 4279], 10.00th=[ 6342], 20.00th=[ 8490], 00:28:40.471 | 30.00th=[10671], 40.00th=[12818], 50.00th=[13892], 60.00th=[14026], 00:28:40.471 | 70.00th=[14026], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:28:40.471 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:28:40.471 | 99.99th=[14160] 00:28:40.471 lat (msec) : >=2000=100.00% 00:28:40.471 cpu : usr=0.00%, sys=0.50%, ctx=70, majf=0, minf=17153 00:28:40.471 IO depths : 1=1.5%, 2=3.0%, 4=6.0%, 8=11.9%, 16=23.9%, 32=47.8%, >=64=6.0% 00:28:40.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.471 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:40.471 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.471 job1: (groupid=0, jobs=1): err= 0: pid=3191019: Wed Nov 6 15:33:07 2024 00:28:40.471 read: IOPS=4, BW=4641KiB/s (4753kB/s)(64.0MiB/14120msec) 00:28:40.471 slat (usec): min=966, max=2066.8k, avg=187673.67, stdev=561222.60 00:28:40.471 clat (msec): min=2108, max=14118, avg=10649.61, stdev=3816.34 00:28:40.471 lat (msec): min=4175, max=14119, avg=10837.28, stdev=3682.63 00:28:40.471 clat percentiles (msec): 00:28:40.471 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:28:40.471 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12818], 60.00th=[13892], 00:28:40.471 | 70.00th=[13892], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:28:40.471 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:28:40.471 | 99.99th=[14160] 00:28:40.471 lat (msec) : >=2000=100.00% 00:28:40.471 cpu : usr=0.00%, sys=0.47%, ctx=57, majf=0, minf=16385 00:28:40.471 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:28:40.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.471 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:40.471 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.471 job1: (groupid=0, jobs=1): err= 0: pid=3191020: Wed Nov 6 15:33:07 2024 00:28:40.471 read: IOPS=32, BW=32.7MiB/s (34.3MB/s)(460MiB/14066msec) 00:28:40.471 slat (usec): min=62, max=4414.5k, avg=26068.95, stdev=247478.79 00:28:40.471 clat (msec): min=550, max=11700, avg=3760.04, stdev=4522.94 00:28:40.471 lat (msec): min=555, max=11715, avg=3786.11, stdev=4534.57 00:28:40.471 clat percentiles (msec): 00:28:40.471 | 1.00th=[ 558], 5.00th=[ 575], 10.00th=[ 592], 20.00th=[ 693], 00:28:40.471 | 30.00th=[ 877], 40.00th=[ 1083], 50.00th=[ 1183], 60.00th=[ 1234], 00:28:40.471 | 70.00th=[ 1284], 80.00th=[10939], 90.00th=[11208], 95.00th=[11476], 00:28:40.471 | 99.00th=[11610], 
99.50th=[11745], 99.90th=[11745], 99.95th=[11745], 00:28:40.471 | 99.99th=[11745] 00:28:40.471 bw ( KiB/s): min= 2048, max=122880, per=3.69%, avg=75741.00, stdev=54421.25, samples=9 00:28:40.471 iops : min= 2, max= 120, avg=73.78, stdev=53.24, samples=9 00:28:40.471 lat (msec) : 750=23.91%, 1000=12.61%, 2000=34.57%, >=2000=28.91% 00:28:40.471 cpu : usr=0.01%, sys=1.15%, ctx=872, majf=0, minf=32769 00:28:40.471 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=7.0%, >=64=86.3% 00:28:40.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.471 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:40.471 issued rwts: total=460,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.471 job1: (groupid=0, jobs=1): err= 0: pid=3191021: Wed Nov 6 15:33:07 2024 00:28:40.471 read: IOPS=6, BW=6453KiB/s (6608kB/s)(76.0MiB/12060msec) 00:28:40.471 slat (usec): min=677, max=2080.3k, avg=157495.20, stdev=523554.95 00:28:40.471 clat (msec): min=89, max=12055, avg=7826.00, stdev=3417.17 00:28:40.471 lat (msec): min=2124, max=12059, avg=7983.49, stdev=3330.58 00:28:40.471 clat percentiles (msec): 00:28:40.471 | 1.00th=[ 90], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 4329], 00:28:40.471 | 30.00th=[ 6409], 40.00th=[ 6477], 50.00th=[ 8557], 60.00th=[ 8658], 00:28:40.471 | 70.00th=[10805], 80.00th=[11879], 90.00th=[12013], 95.00th=[12013], 00:28:40.471 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:28:40.471 | 99.99th=[12013] 00:28:40.471 lat (msec) : 100=1.32%, >=2000=98.68% 00:28:40.471 cpu : usr=0.02%, sys=0.62%, ctx=68, majf=0, minf=19457 00:28:40.471 IO depths : 1=1.3%, 2=2.6%, 4=5.3%, 8=10.5%, 16=21.1%, 32=42.1%, >=64=17.1% 00:28:40.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.471 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:40.471 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.471 job1: (groupid=0, jobs=1): err= 0: pid=3191022: Wed Nov 6 15:33:07 2024 00:28:40.471 read: IOPS=21, BW=21.3MiB/s (22.3MB/s)(302MiB/14172msec) 00:28:40.471 slat (usec): min=879, max=2225.7k, avg=33148.65, stdev=214095.52 00:28:40.471 clat (msec): min=1078, max=11693, avg=5583.65, stdev=4669.59 00:28:40.471 lat (msec): min=1088, max=11733, avg=5616.80, stdev=4675.47 00:28:40.471 clat percentiles (msec): 00:28:40.471 | 1.00th=[ 1083], 5.00th=[ 1150], 10.00th=[ 1234], 20.00th=[ 1401], 00:28:40.471 | 30.00th=[ 1670], 40.00th=[ 1770], 50.00th=[ 1888], 60.00th=[ 8490], 00:28:40.471 | 70.00th=[10939], 80.00th=[11208], 90.00th=[11476], 95.00th=[11610], 00:28:40.471 | 99.00th=[11745], 99.50th=[11745], 99.90th=[11745], 99.95th=[11745], 00:28:40.471 | 99.99th=[11745] 00:28:40.471 bw ( KiB/s): min= 1467, max=120832, per=2.18%, avg=44727.38, stdev=52413.20, samples=8 00:28:40.471 iops : min= 1, max= 118, avg=43.62, stdev=51.24, samples=8 00:28:40.471 lat (msec) : 2000=55.30%, >=2000=44.70% 00:28:40.471 cpu : usr=0.00%, sys=0.92%, ctx=865, majf=0, minf=32769 00:28:40.471 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.3%, 32=10.6%, >=64=79.1% 00:28:40.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.471 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:28:40.471 issued rwts: total=302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.471 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:28:40.471 job1: (groupid=0, jobs=1): err= 0: pid=3191023: Wed Nov 6 15:33:07 2024 00:28:40.471 read: IOPS=29, BW=29.4MiB/s (30.9MB/s)(414MiB/14063msec) 00:28:40.471 slat (usec): min=117, max=2225.1k, avg=24150.25, stdev=182485.85 00:28:40.471 clat (msec): min=850, max=11427, avg=4122.78, stdev=4219.38 00:28:40.471 lat (msec): min=852, max=11468, avg=4146.93, stdev=4231.36 00:28:40.471 clat percentiles (msec): 00:28:40.471 | 1.00th=[ 877], 5.00th=[ 927], 10.00th=[ 986], 20.00th=[ 1020], 00:28:40.471 | 30.00th=[ 1099], 40.00th=[ 1200], 50.00th=[ 1234], 60.00th=[ 1267], 00:28:40.471 | 70.00th=[ 7416], 80.00th=[10805], 90.00th=[11073], 95.00th=[11342], 00:28:40.471 | 99.00th=[11476], 99.50th=[11476], 99.90th=[11476], 99.95th=[11476], 00:28:40.471 | 99.99th=[11476] 00:28:40.471 bw ( KiB/s): min= 4096, max=135168, per=3.58%, avg=73358.00, stdev=50158.74, samples=8 00:28:40.471 iops : min= 4, max= 132, avg=71.50, stdev=48.97, samples=8 00:28:40.471 lat (msec) : 1000=14.01%, 2000=47.83%, >=2000=38.16% 00:28:40.471 cpu : usr=0.01%, sys=0.95%, ctx=858, majf=0, minf=32769 00:28:40.471 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.9%, 32=7.7%, >=64=84.8% 00:28:40.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.471 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:40.471 issued rwts: total=414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.471 job1: (groupid=0, jobs=1): err= 0: pid=3191024: Wed Nov 6 15:33:07 2024 00:28:40.471 read: IOPS=2, BW=2599KiB/s (2662kB/s)(36.0MiB/14183msec) 00:28:40.471 slat (usec): min=1003, max=2090.8k, avg=277972.78, stdev=670463.51 00:28:40.471 clat (msec): min=4175, max=14133, avg=10267.55, stdev=4182.15 00:28:40.471 lat (msec): min=4190, max=14182, avg=10545.53, stdev=4097.35 00:28:40.471 clat percentiles (msec): 00:28:40.471 | 1.00th=[ 4178], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 4279], 00:28:40.471 | 30.00th=[ 6342], 40.00th=[ 8557], 50.00th=[12818], 60.00th=[14026], 00:28:40.471 | 70.00th=[14026], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:28:40.471 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:28:40.471 | 99.99th=[14160] 00:28:40.471 lat (msec) : >=2000=100.00% 00:28:40.471 cpu : usr=0.00%, sys=0.27%, ctx=52, majf=0, minf=9217 00:28:40.471 IO depths : 1=2.8%, 2=5.6%, 4=11.1%, 8=22.2%, 16=44.4%, 32=13.9%, >=64=0.0% 00:28:40.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.471 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:40.471 issued rwts: total=36,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.471 job1: (groupid=0, jobs=1): err= 0: pid=3191025: Wed Nov 6 15:33:07 2024 00:28:40.471 read: IOPS=85, BW=85.7MiB/s (89.9MB/s)(1040MiB/12134msec) 00:28:40.471 slat (usec): min=46, max=2032.6k, avg=9615.70, stdev=109195.44 00:28:40.471 clat (msec): min=274, max=8869, avg=1413.29, stdev=2465.94 00:28:40.471 lat (msec): min=274, max=8872, avg=1422.90, stdev=2475.65 00:28:40.471 clat percentiles (msec): 00:28:40.471 | 1.00th=[ 279], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 296], 00:28:40.471 | 30.00th=[ 321], 40.00th=[ 338], 50.00th=[ 363], 60.00th=[ 405], 00:28:40.471 | 70.00th=[ 567], 80.00th=[ 785], 90.00th=[ 4665], 95.00th=[ 8792], 00:28:40.471 | 99.00th=[ 8792], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:28:40.471 
| 99.99th=[ 8926] 00:28:40.471 bw ( KiB/s): min= 2048, max=439441, per=9.12%, avg=186894.50, stdev=182801.87, samples=10 00:28:40.471 iops : min= 2, max= 429, avg=182.50, stdev=178.50, samples=10 00:28:40.471 lat (msec) : 500=67.88%, 750=11.54%, 1000=3.94%, >=2000=16.63% 00:28:40.471 cpu : usr=0.01%, sys=1.78%, ctx=855, majf=0, minf=32769 00:28:40.471 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=93.9% 00:28:40.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.471 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:40.471 issued rwts: total=1040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.471 job1: (groupid=0, jobs=1): err= 0: pid=3191026: Wed Nov 6 15:33:07 2024 00:28:40.471 read: IOPS=29, BW=29.0MiB/s (30.4MB/s)(354MiB/12203msec) 00:28:40.472 slat (usec): min=452, max=2105.2k, avg=28333.46, stdev=190852.84 00:28:40.472 clat (msec): min=780, max=9694, avg=4119.83, stdev=3774.36 00:28:40.472 lat (msec): min=794, max=9696, avg=4148.16, stdev=3779.30 00:28:40.472 clat percentiles (msec): 00:28:40.472 | 1.00th=[ 785], 5.00th=[ 802], 10.00th=[ 818], 20.00th=[ 953], 00:28:40.472 | 30.00th=[ 1250], 40.00th=[ 1519], 50.00th=[ 1653], 60.00th=[ 1905], 00:28:40.472 | 70.00th=[ 8926], 80.00th=[ 9194], 90.00th=[ 9463], 95.00th=[ 9597], 00:28:40.472 | 99.00th=[ 9597], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 00:28:40.472 | 99.99th=[ 9731] 00:28:40.472 bw ( KiB/s): min= 1374, max=161792, per=2.83%, avg=58026.13, stdev=65965.72, samples=8 00:28:40.472 iops : min= 1, max= 158, avg=56.50, stdev=64.57, samples=8 00:28:40.472 lat (msec) : 1000=22.88%, 2000=38.42%, >=2000=38.70% 00:28:40.472 cpu : usr=0.01%, sys=1.14%, ctx=776, majf=0, minf=32769 00:28:40.472 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.5%, 32=9.0%, >=64=82.2% 00:28:40.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.472 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:28:40.472 issued rwts: total=354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.472 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.472 job1: (groupid=0, jobs=1): err= 0: pid=3191027: Wed Nov 6 15:33:07 2024 00:28:40.472 read: IOPS=2, BW=2526KiB/s (2586kB/s)(35.0MiB/14190msec) 00:28:40.472 slat (usec): min=1104, max=2127.9k, avg=286450.29, stdev=693251.19 00:28:40.472 clat (msec): min=4163, max=14187, avg=12481.43, stdev=3057.31 00:28:40.472 lat (msec): min=4191, max=14189, avg=12767.88, stdev=2704.41 00:28:40.472 clat percentiles (msec): 00:28:40.472 | 1.00th=[ 4178], 5.00th=[ 4178], 10.00th=[ 6342], 20.00th=[10671], 00:28:40.472 | 30.00th=[12818], 40.00th=[14026], 50.00th=[14026], 60.00th=[14160], 00:28:40.472 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:28:40.472 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:28:40.472 | 99.99th=[14160] 00:28:40.472 lat (msec) : >=2000=100.00% 00:28:40.472 cpu : usr=0.00%, sys=0.27%, ctx=66, majf=0, minf=8961 00:28:40.472 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0% 00:28:40.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.472 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:40.472 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.472 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.472 job1: (groupid=0, 
jobs=1): err= 0: pid=3191028: Wed Nov 6 15:33:07 2024 00:28:40.472 read: IOPS=7, BW=7346KiB/s (7523kB/s)(87.0MiB/12127msec) 00:28:40.472 slat (usec): min=987, max=2068.6k, avg=114989.11, stdev=441783.42 00:28:40.472 clat (msec): min=2121, max=12124, avg=9754.48, stdev=3307.92 00:28:40.472 lat (msec): min=2132, max=12126, avg=9869.47, stdev=3211.99 00:28:40.472 clat percentiles (msec): 00:28:40.472 | 1.00th=[ 2123], 5.00th=[ 2165], 10.00th=[ 4245], 20.00th=[ 6409], 00:28:40.472 | 30.00th=[ 8658], 40.00th=[10671], 50.00th=[11879], 60.00th=[12013], 00:28:40.472 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:28:40.472 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:40.472 | 99.99th=[12147] 00:28:40.472 lat (msec) : >=2000=100.00% 00:28:40.472 cpu : usr=0.00%, sys=0.78%, ctx=108, majf=0, minf=22273 00:28:40.472 IO depths : 1=1.1%, 2=2.3%, 4=4.6%, 8=9.2%, 16=18.4%, 32=36.8%, >=64=27.6% 00:28:40.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.472 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:40.472 issued rwts: total=87,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.472 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.472 job1: (groupid=0, jobs=1): err= 0: pid=3191029: Wed Nov 6 15:33:07 2024 00:28:40.472 read: IOPS=133, BW=133MiB/s (140MB/s)(1350MiB/10125msec) 00:28:40.472 slat (usec): min=42, max=2039.3k, avg=7442.97, stdev=71081.54 00:28:40.472 clat (msec): min=68, max=4823, avg=916.57, stdev=1017.64 00:28:40.472 lat (msec): min=184, max=4836, avg=924.01, stdev=1025.46 00:28:40.472 clat percentiles (msec): 00:28:40.472 | 1.00th=[ 224], 5.00th=[ 338], 10.00th=[ 359], 20.00th=[ 418], 00:28:40.472 | 30.00th=[ 430], 40.00th=[ 447], 50.00th=[ 527], 60.00th=[ 575], 00:28:40.472 | 70.00th=[ 625], 80.00th=[ 718], 90.00th=[ 2299], 95.00th=[ 3876], 00:28:40.472 | 99.00th=[ 3910], 99.50th=[ 3910], 99.90th=[ 3910], 99.95th=[ 4799], 00:28:40.472 | 99.99th=[ 4799] 00:28:40.472 bw ( KiB/s): min=10240, max=337920, per=9.39%, avg=192461.62, stdev=90597.41, samples=13 00:28:40.472 iops : min= 10, max= 330, avg=187.85, stdev=88.51, samples=13 00:28:40.472 lat (msec) : 100=0.07%, 250=1.26%, 500=45.33%, 750=33.93%, 2000=9.19% 00:28:40.472 lat (msec) : >=2000=10.22% 00:28:40.472 cpu : usr=0.13%, sys=2.46%, ctx=1166, majf=0, minf=32769 00:28:40.472 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3% 00:28:40.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.472 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:40.472 issued rwts: total=1350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.472 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.472 job1: (groupid=0, jobs=1): err= 0: pid=3191030: Wed Nov 6 15:33:07 2024 00:28:40.472 read: IOPS=34, BW=34.3MiB/s (35.9MB/s)(415MiB/12113msec) 00:28:40.472 slat (usec): min=47, max=2120.8k, avg=28974.87, stdev=201755.55 00:28:40.472 clat (msec): min=85, max=8950, avg=3389.72, stdev=3336.57 00:28:40.472 lat (msec): min=510, max=8957, avg=3418.69, stdev=3339.12 00:28:40.472 clat percentiles (msec): 00:28:40.472 | 1.00th=[ 510], 5.00th=[ 535], 10.00th=[ 609], 20.00th=[ 827], 00:28:40.472 | 30.00th=[ 936], 40.00th=[ 1250], 50.00th=[ 1485], 60.00th=[ 1787], 00:28:40.472 | 70.00th=[ 4665], 80.00th=[ 8658], 90.00th=[ 8792], 95.00th=[ 8926], 00:28:40.472 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:28:40.472 | 
99.99th=[ 8926] 00:28:40.472 bw ( KiB/s): min= 4096, max=235520, per=3.58%, avg=73472.00, stdev=84748.87, samples=8 00:28:40.472 iops : min= 4, max= 230, avg=71.75, stdev=82.76, samples=8 00:28:40.472 lat (msec) : 100=0.24%, 750=15.42%, 1000=16.87%, 2000=31.08%, >=2000=36.39% 00:28:40.472 cpu : usr=0.00%, sys=1.12%, ctx=823, majf=0, minf=32769 00:28:40.472 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.9%, 32=7.7%, >=64=84.8% 00:28:40.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.472 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:40.472 issued rwts: total=415,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.472 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.472 job2: (groupid=0, jobs=1): err= 0: pid=3191033: Wed Nov 6 15:33:07 2024 00:28:40.472 read: IOPS=4, BW=4148KiB/s (4248kB/s)(49.0MiB/12096msec) 00:28:40.472 slat (usec): min=480, max=2051.9k, avg=244938.43, stdev=629602.82 00:28:40.472 clat (msec): min=93, max=12091, avg=6698.34, stdev=3519.99 00:28:40.472 lat (msec): min=2119, max=12095, avg=6943.28, stdev=3468.02 00:28:40.472 clat percentiles (msec): 00:28:40.472 | 1.00th=[ 94], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 4212], 00:28:40.472 | 30.00th=[ 4329], 40.00th=[ 4396], 50.00th=[ 4396], 60.00th=[ 6477], 00:28:40.472 | 70.00th=[ 8658], 80.00th=[10805], 90.00th=[11879], 95.00th=[12147], 00:28:40.472 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:40.472 | 99.99th=[12147] 00:28:40.472 lat (msec) : 100=2.04%, >=2000=97.96% 00:28:40.472 cpu : usr=0.00%, sys=0.37%, ctx=52, majf=0, minf=12545 00:28:40.472 IO depths : 1=2.0%, 2=4.1%, 4=8.2%, 8=16.3%, 16=32.7%, 32=36.7%, >=64=0.0% 00:28:40.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.472 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:40.472 issued rwts: total=49,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.472 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.472 job2: (groupid=0, jobs=1): err= 0: pid=3191034: Wed Nov 6 15:33:07 2024 00:28:40.472 read: IOPS=9, BW=9297KiB/s (9520kB/s)(110MiB/12116msec) 00:28:40.472 slat (usec): min=917, max=2061.3k, avg=90931.55, stdev=394023.68 00:28:40.472 clat (msec): min=2112, max=12113, avg=9346.49, stdev=3236.39 00:28:40.472 lat (msec): min=2122, max=12114, avg=9437.42, stdev=3171.12 00:28:40.472 clat percentiles (msec): 00:28:40.472 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4245], 20.00th=[ 6342], 00:28:40.472 | 30.00th=[ 6477], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12013], 00:28:40.472 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 00:28:40.472 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:40.472 | 99.99th=[12147] 00:28:40.472 lat (msec) : >=2000=100.00% 00:28:40.472 cpu : usr=0.00%, sys=0.96%, ctx=100, majf=0, minf=28161 00:28:40.472 IO depths : 1=0.9%, 2=1.8%, 4=3.6%, 8=7.3%, 16=14.5%, 32=29.1%, >=64=42.7% 00:28:40.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.472 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:40.472 issued rwts: total=110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.472 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.472 job2: (groupid=0, jobs=1): err= 0: pid=3191035: Wed Nov 6 15:33:07 2024 00:28:40.472 read: IOPS=6, BW=6228KiB/s (6377kB/s)(86.0MiB/14141msec) 00:28:40.472 slat (usec): min=749, max=2051.6k, 
avg=140026.26, stdev=479088.84 00:28:40.472 clat (msec): min=2097, max=14136, avg=8987.17, stdev=3066.76 00:28:40.472 lat (msec): min=4149, max=14139, avg=9127.20, stdev=3023.09 00:28:40.472 clat percentiles (msec): 00:28:40.472 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 6409], 00:28:40.472 | 30.00th=[ 8288], 40.00th=[ 8356], 50.00th=[ 8423], 60.00th=[ 8490], 00:28:40.472 | 70.00th=[ 8490], 80.00th=[12818], 90.00th=[14160], 95.00th=[14160], 00:28:40.472 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:28:40.472 | 99.99th=[14160] 00:28:40.472 lat (msec) : >=2000=100.00% 00:28:40.472 cpu : usr=0.00%, sys=0.55%, ctx=109, majf=0, minf=22017 00:28:40.472 IO depths : 1=1.2%, 2=2.3%, 4=4.7%, 8=9.3%, 16=18.6%, 32=37.2%, >=64=26.7% 00:28:40.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.472 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:40.472 issued rwts: total=86,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.472 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.472 job2: (groupid=0, jobs=1): err= 0: pid=3191036: Wed Nov 6 15:33:07 2024 00:28:40.472 read: IOPS=4, BW=4222KiB/s (4323kB/s)(58.0MiB/14067msec) 00:28:40.472 slat (usec): min=950, max=2080.3k, avg=206393.61, stdev=595502.62 00:28:40.472 clat (msec): min=2094, max=14064, avg=9887.22, stdev=3722.11 00:28:40.472 lat (msec): min=4139, max=14065, avg=10093.61, stdev=3612.74 00:28:40.472 clat percentiles (msec): 00:28:40.472 | 1.00th=[ 2089], 5.00th=[ 4144], 10.00th=[ 4178], 20.00th=[ 6342], 00:28:40.473 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[12684], 00:28:40.473 | 70.00th=[12818], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:28:40.473 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:28:40.473 | 99.99th=[14026] 00:28:40.473 lat (msec) : >=2000=100.00% 00:28:40.473 cpu : usr=0.01%, sys=0.41%, ctx=53, majf=0, minf=14849 00:28:40.473 IO depths : 1=1.7%, 2=3.4%, 4=6.9%, 8=13.8%, 16=27.6%, 32=46.6%, >=64=0.0% 00:28:40.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.473 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:40.473 issued rwts: total=58,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.473 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.473 job2: (groupid=0, jobs=1): err= 0: pid=3191037: Wed Nov 6 15:33:07 2024 00:28:40.473 read: IOPS=3, BW=3671KiB/s (3759kB/s)(51.0MiB/14227msec) 00:28:40.473 slat (usec): min=854, max=2116.4k, avg=196742.89, stdev=580753.19 00:28:40.473 clat (msec): min=4191, max=14224, avg=12956.57, stdev=2559.48 00:28:40.473 lat (msec): min=6297, max=14225, avg=13153.32, stdev=2237.70 00:28:40.473 clat percentiles (msec): 00:28:40.473 | 1.00th=[ 4178], 5.00th=[ 6342], 10.00th=[ 8490], 20.00th=[12818], 00:28:40.473 | 30.00th=[14026], 40.00th=[14160], 50.00th=[14160], 60.00th=[14160], 00:28:40.473 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:28:40.473 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:28:40.473 | 99.99th=[14160] 00:28:40.473 lat (msec) : >=2000=100.00% 00:28:40.473 cpu : usr=0.00%, sys=0.41%, ctx=83, majf=0, minf=13057 00:28:40.473 IO depths : 1=2.0%, 2=3.9%, 4=7.8%, 8=15.7%, 16=31.4%, 32=39.2%, >=64=0.0% 00:28:40.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.473 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:40.473 
issued rwts: total=51,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.473 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.473 job2: (groupid=0, jobs=1): err= 0: pid=3191038: Wed Nov 6 15:33:07 2024 00:28:40.473 read: IOPS=2, BW=2286KiB/s (2340kB/s)(27.0MiB/12097msec) 00:28:40.473 slat (usec): min=726, max=2079.1k, avg=444591.67, stdev=812214.22 00:28:40.473 clat (msec): min=92, max=11985, avg=6676.08, stdev=3727.30 00:28:40.473 lat (msec): min=2114, max=12096, avg=7120.68, stdev=3626.38 00:28:40.473 clat percentiles (msec): 00:28:40.473 | 1.00th=[ 93], 5.00th=[ 2123], 10.00th=[ 2123], 20.00th=[ 2165], 00:28:40.473 | 30.00th=[ 4329], 40.00th=[ 4396], 50.00th=[ 6477], 60.00th=[ 8557], 00:28:40.473 | 70.00th=[ 8658], 80.00th=[10805], 90.00th=[11879], 95.00th=[12013], 00:28:40.473 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:28:40.473 | 99.99th=[12013] 00:28:40.473 lat (msec) : 100=3.70%, >=2000=96.30% 00:28:40.473 cpu : usr=0.00%, sys=0.22%, ctx=60, majf=0, minf=6913 00:28:40.473 IO depths : 1=3.7%, 2=7.4%, 4=14.8%, 8=29.6%, 16=44.4%, 32=0.0%, >=64=0.0% 00:28:40.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.473 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:28:40.473 issued rwts: total=27,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.473 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.473 job2: (groupid=0, jobs=1): err= 0: pid=3191039: Wed Nov 6 15:33:07 2024 00:28:40.473 read: IOPS=1, BW=1673KiB/s (1713kB/s)(23.0MiB/14078msec) 00:28:40.473 slat (msec): min=14, max=2067, avg=520.12, stdev=861.91 00:28:40.473 clat (msec): min=2115, max=12860, avg=7598.34, stdev=3414.79 00:28:40.473 lat (msec): min=4172, max=14077, avg=8118.46, stdev=3452.49 00:28:40.473 clat percentiles (msec): 00:28:40.473 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4178], 20.00th=[ 4212], 00:28:40.473 | 30.00th=[ 4279], 40.00th=[ 6342], 50.00th=[ 6409], 60.00th=[ 8490], 00:28:40.473 | 70.00th=[10671], 80.00th=[10805], 90.00th=[12818], 95.00th=[12818], 00:28:40.473 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:28:40.473 | 99.99th=[12818] 00:28:40.473 lat (msec) : >=2000=100.00% 00:28:40.473 cpu : usr=0.00%, sys=0.16%, ctx=42, majf=0, minf=5889 00:28:40.473 IO depths : 1=4.3%, 2=8.7%, 4=17.4%, 8=34.8%, 16=34.8%, 32=0.0%, >=64=0.0% 00:28:40.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.473 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:28:40.473 issued rwts: total=23,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.473 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.473 job2: (groupid=0, jobs=1): err= 0: pid=3191040: Wed Nov 6 15:33:07 2024 00:28:40.473 read: IOPS=1, BW=1093KiB/s (1120kB/s)(15.0MiB/14047msec) 00:28:40.473 slat (msec): min=10, max=2120, avg=797.17, stdev=995.43 00:28:40.473 clat (msec): min=2089, max=14017, avg=9659.38, stdev=3963.24 00:28:40.473 lat (msec): min=4166, max=14046, avg=10456.56, stdev=3508.23 00:28:40.473 clat percentiles (msec): 00:28:40.473 | 1.00th=[ 2089], 5.00th=[ 2089], 10.00th=[ 4178], 20.00th=[ 6275], 00:28:40.473 | 30.00th=[ 6342], 40.00th=[ 8423], 50.00th=[10671], 60.00th=[10671], 00:28:40.473 | 70.00th=[12818], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:28:40.473 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:28:40.473 | 99.99th=[14026] 00:28:40.473 lat (msec) : >=2000=100.00% 00:28:40.473 cpu : usr=0.00%, 
sys=0.11%, ctx=57, majf=0, minf=3841 00:28:40.473 IO depths : 1=6.7%, 2=13.3%, 4=26.7%, 8=53.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:40.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.473 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.473 issued rwts: total=15,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.473 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.473 job2: (groupid=0, jobs=1): err= 0: pid=3191041: Wed Nov 6 15:33:07 2024 00:28:40.473 read: IOPS=1, BW=1884KiB/s (1929kB/s)(26.0MiB/14130msec) 00:28:40.473 slat (msec): min=2, max=2095, avg=463.07, stdev=835.05 00:28:40.473 clat (msec): min=2089, max=14111, avg=10447.61, stdev=3716.56 00:28:40.473 lat (msec): min=4184, max=14129, avg=10910.68, stdev=3367.10 00:28:40.473 clat percentiles (msec): 00:28:40.473 | 1.00th=[ 2089], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:28:40.473 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[10671], 60.00th=[12818], 00:28:40.473 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14026], 95.00th=[14160], 00:28:40.473 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:28:40.473 | 99.99th=[14160] 00:28:40.473 lat (msec) : >=2000=100.00% 00:28:40.473 cpu : usr=0.00%, sys=0.20%, ctx=66, majf=0, minf=6657 00:28:40.473 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0% 00:28:40.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.473 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:28:40.473 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.473 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.473 job2: (groupid=0, jobs=1): err= 0: pid=3191042: Wed Nov 6 15:33:07 2024 00:28:40.473 read: IOPS=3, BW=3968KiB/s (4063kB/s)(47.0MiB/12129msec) 00:28:40.473 slat (usec): min=897, max=2104.0k, avg=212952.64, stdev=598721.62 00:28:40.473 clat (msec): min=2119, max=12122, avg=9089.84, stdev=3430.80 00:28:40.473 lat (msec): min=2135, max=12128, avg=9302.79, stdev=3296.76 00:28:40.473 clat percentiles (msec): 00:28:40.473 | 1.00th=[ 2123], 5.00th=[ 2165], 10.00th=[ 4279], 20.00th=[ 6409], 00:28:40.473 | 30.00th=[ 6477], 40.00th=[ 8557], 50.00th=[10805], 60.00th=[12013], 00:28:40.473 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:28:40.473 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:40.473 | 99.99th=[12147] 00:28:40.473 lat (msec) : >=2000=100.00% 00:28:40.473 cpu : usr=0.00%, sys=0.44%, ctx=89, majf=0, minf=12033 00:28:40.473 IO depths : 1=2.1%, 2=4.3%, 4=8.5%, 8=17.0%, 16=34.0%, 32=34.0%, >=64=0.0% 00:28:40.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.473 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:40.473 issued rwts: total=47,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.473 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.473 job2: (groupid=0, jobs=1): err= 0: pid=3191043: Wed Nov 6 15:33:07 2024 00:28:40.473 read: IOPS=3, BW=4033KiB/s (4129kB/s)(56.0MiB/14220msec) 00:28:40.473 slat (usec): min=1088, max=2096.6k, avg=178568.41, stdev=548937.21 00:28:40.473 clat (msec): min=4218, max=14217, avg=12584.65, stdev=2903.72 00:28:40.473 lat (msec): min=4265, max=14218, avg=12763.22, stdev=2678.66 00:28:40.473 clat percentiles (msec): 00:28:40.473 | 1.00th=[ 4212], 5.00th=[ 4279], 10.00th=[ 6409], 20.00th=[10671], 00:28:40.473 | 30.00th=[12818], 
40.00th=[14026], 50.00th=[14160], 60.00th=[14160], 00:28:40.473 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:28:40.473 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:28:40.473 | 99.99th=[14160] 00:28:40.473 lat (msec) : >=2000=100.00% 00:28:40.473 cpu : usr=0.00%, sys=0.46%, ctx=96, majf=0, minf=14337 00:28:40.473 IO depths : 1=1.8%, 2=3.6%, 4=7.1%, 8=14.3%, 16=28.6%, 32=44.6%, >=64=0.0% 00:28:40.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.473 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:40.473 issued rwts: total=56,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.473 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.473 job2: (groupid=0, jobs=1): err= 0: pid=3191044: Wed Nov 6 15:33:07 2024 00:28:40.473 read: IOPS=42, BW=42.9MiB/s (45.0MB/s)(520MiB/12111msec) 00:28:40.473 slat (usec): min=49, max=2119.7k, avg=23104.92, stdev=174943.87 00:28:40.473 clat (msec): min=93, max=6485, avg=2580.07, stdev=2036.69 00:28:40.473 lat (msec): min=286, max=7834, avg=2603.17, stdev=2045.27 00:28:40.473 clat percentiles (msec): 00:28:40.473 | 1.00th=[ 284], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 292], 00:28:40.473 | 30.00th=[ 1804], 40.00th=[ 2056], 50.00th=[ 2232], 60.00th=[ 2366], 00:28:40.473 | 70.00th=[ 2467], 80.00th=[ 5738], 90.00th=[ 5805], 95.00th=[ 5873], 00:28:40.473 | 99.00th=[ 5940], 99.50th=[ 6477], 99.90th=[ 6477], 99.95th=[ 6477], 00:28:40.473 | 99.99th=[ 6477] 00:28:40.473 bw ( KiB/s): min= 6144, max=380928, per=6.53%, avg=133802.67, stdev=142159.33, samples=6 00:28:40.473 iops : min= 6, max= 372, avg=130.67, stdev=138.83, samples=6 00:28:40.473 lat (msec) : 100=0.19%, 500=28.85%, 2000=9.81%, >=2000=61.15% 00:28:40.473 cpu : usr=0.01%, sys=1.27%, ctx=453, majf=0, minf=32769 00:28:40.473 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.2%, >=64=87.9% 00:28:40.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.473 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:40.474 issued rwts: total=520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.474 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.474 job2: (groupid=0, jobs=1): err= 0: pid=3191045: Wed Nov 6 15:33:07 2024 00:28:40.474 read: IOPS=3, BW=3485KiB/s (3568kB/s)(48.0MiB/14105msec) 00:28:40.474 slat (usec): min=856, max=2087.1k, avg=250330.03, stdev=648709.57 00:28:40.474 clat (msec): min=2088, max=14100, avg=9673.66, stdev=3670.46 00:28:40.474 lat (msec): min=4151, max=14104, avg=9923.99, stdev=3549.90 00:28:40.474 clat percentiles (msec): 00:28:40.474 | 1.00th=[ 2089], 5.00th=[ 4178], 10.00th=[ 4178], 20.00th=[ 6342], 00:28:40.474 | 30.00th=[ 6342], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[10671], 00:28:40.474 | 70.00th=[12818], 80.00th=[13892], 90.00th=[14026], 95.00th=[14160], 00:28:40.474 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:28:40.474 | 99.99th=[14160] 00:28:40.474 lat (msec) : >=2000=100.00% 00:28:40.474 cpu : usr=0.01%, sys=0.35%, ctx=47, majf=0, minf=12289 00:28:40.474 IO depths : 1=2.1%, 2=4.2%, 4=8.3%, 8=16.7%, 16=33.3%, 32=35.4%, >=64=0.0% 00:28:40.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.474 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:40.474 issued rwts: total=48,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.474 latency : target=0, window=0, percentile=100.00%, depth=128 
00:28:40.474 job3: (groupid=0, jobs=1): err= 0: pid=3191046: Wed Nov 6 15:33:07 2024 00:28:40.474 read: IOPS=30, BW=30.5MiB/s (32.0MB/s)(429MiB/14070msec) 00:28:40.474 slat (usec): min=47, max=2058.7k, avg=27932.65, stdev=205764.71 00:28:40.474 clat (msec): min=430, max=8288, avg=2310.52, stdev=2399.94 00:28:40.474 lat (msec): min=432, max=8290, avg=2338.45, stdev=2418.00 00:28:40.474 clat percentiles (msec): 00:28:40.474 | 1.00th=[ 435], 5.00th=[ 439], 10.00th=[ 460], 20.00th=[ 489], 00:28:40.474 | 30.00th=[ 514], 40.00th=[ 542], 50.00th=[ 575], 60.00th=[ 617], 00:28:40.474 | 70.00th=[ 4732], 80.00th=[ 4866], 90.00th=[ 5134], 95.00th=[ 7013], 00:28:40.474 | 99.00th=[ 8221], 99.50th=[ 8221], 99.90th=[ 8288], 99.95th=[ 8288], 00:28:40.474 | 99.99th=[ 8288] 00:28:40.474 bw ( KiB/s): min= 2052, max=303104, per=6.03%, avg=123538.80, stdev=127211.76, samples=5 00:28:40.474 iops : min= 2, max= 296, avg=120.20, stdev=124.35, samples=5 00:28:40.474 lat (msec) : 500=23.78%, 750=37.76%, 2000=0.47%, >=2000=38.00% 00:28:40.474 cpu : usr=0.01%, sys=0.90%, ctx=470, majf=0, minf=32769 00:28:40.474 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.7%, 32=7.5%, >=64=85.3% 00:28:40.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.474 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:40.474 issued rwts: total=429,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.474 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.474 job3: (groupid=0, jobs=1): err= 0: pid=3191047: Wed Nov 6 15:33:07 2024 00:28:40.474 read: IOPS=2, BW=2477KiB/s (2537kB/s)(34.0MiB/14055msec) 00:28:40.474 slat (usec): min=996, max=2095.8k, avg=351780.69, stdev=748398.35 00:28:40.474 clat (msec): min=2094, max=13971, avg=9343.64, stdev=3130.51 00:28:40.474 lat (msec): min=4152, max=14054, avg=9695.42, stdev=2958.48 00:28:40.474 clat percentiles (msec): 00:28:40.474 | 1.00th=[ 2089], 5.00th=[ 4144], 10.00th=[ 4212], 20.00th=[ 6342], 00:28:40.474 | 30.00th=[ 8423], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[10671], 00:28:40.474 | 70.00th=[10671], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:28:40.474 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:28:40.474 | 99.99th=[14026] 00:28:40.474 lat (msec) : >=2000=100.00% 00:28:40.474 cpu : usr=0.00%, sys=0.23%, ctx=49, majf=0, minf=8705 00:28:40.474 IO depths : 1=2.9%, 2=5.9%, 4=11.8%, 8=23.5%, 16=47.1%, 32=8.8%, >=64=0.0% 00:28:40.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.474 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:40.474 issued rwts: total=34,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.474 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.474 job3: (groupid=0, jobs=1): err= 0: pid=3191048: Wed Nov 6 15:33:07 2024 00:28:40.474 read: IOPS=12, BW=12.6MiB/s (13.2MB/s)(154MiB/12226msec) 00:28:40.474 slat (usec): min=176, max=2049.9k, avg=65201.23, stdev=320060.69 00:28:40.474 clat (msec): min=2184, max=12173, avg=8429.79, stdev=2483.67 00:28:40.474 lat (msec): min=2249, max=12175, avg=8494.99, stdev=2449.31 00:28:40.474 clat percentiles (msec): 00:28:40.474 | 1.00th=[ 2265], 5.00th=[ 4044], 10.00th=[ 4044], 20.00th=[ 6544], 00:28:40.474 | 30.00th=[ 8154], 40.00th=[ 8288], 50.00th=[ 8423], 60.00th=[ 8490], 00:28:40.474 | 70.00th=[ 8658], 80.00th=[10805], 90.00th=[12147], 95.00th=[12147], 00:28:40.474 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:40.474 | 
99.99th=[12147] 00:28:40.474 bw ( KiB/s): min= 2011, max=30720, per=0.45%, avg=9206.17, stdev=11129.22, samples=6 00:28:40.474 iops : min= 1, max= 30, avg= 8.50, stdev=11.15, samples=6 00:28:40.474 lat (msec) : >=2000=100.00% 00:28:40.474 cpu : usr=0.01%, sys=1.09%, ctx=157, majf=0, minf=32487 00:28:40.474 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=5.2%, 16=10.4%, 32=20.8%, >=64=59.1% 00:28:40.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.474 complete : 0=0.0%, 4=96.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.6% 00:28:40.474 issued rwts: total=154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.474 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.474 job3: (groupid=0, jobs=1): err= 0: pid=3191049: Wed Nov 6 15:33:07 2024 00:28:40.474 read: IOPS=2, BW=3058KiB/s (3131kB/s)(42.0MiB/14064msec) 00:28:40.474 slat (usec): min=1007, max=2072.5k, avg=284956.37, stdev=675681.08 00:28:40.474 clat (msec): min=2095, max=14059, avg=10633.25, stdev=3752.20 00:28:40.474 lat (msec): min=4144, max=14063, avg=10918.21, stdev=3536.22 00:28:40.474 clat percentiles (msec): 00:28:40.474 | 1.00th=[ 2089], 5.00th=[ 4178], 10.00th=[ 4245], 20.00th=[ 6342], 00:28:40.474 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12684], 60.00th=[12818], 00:28:40.474 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:28:40.474 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:28:40.474 | 99.99th=[14026] 00:28:40.474 lat (msec) : >=2000=100.00% 00:28:40.474 cpu : usr=0.01%, sys=0.28%, ctx=65, majf=0, minf=10753 00:28:40.474 IO depths : 1=2.4%, 2=4.8%, 4=9.5%, 8=19.0%, 16=38.1%, 32=26.2%, >=64=0.0% 00:28:40.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.474 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:40.474 issued rwts: total=42,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.474 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.474 job3: (groupid=0, jobs=1): err= 0: pid=3191050: Wed Nov 6 15:33:07 2024 00:28:40.474 read: IOPS=60, BW=60.6MiB/s (63.6MB/s)(725MiB/11962msec) 00:28:40.474 slat (usec): min=43, max=2088.0k, avg=13803.45, stdev=138122.10 00:28:40.474 clat (msec): min=193, max=8162, avg=820.29, stdev=1090.71 00:28:40.474 lat (msec): min=202, max=8165, avg=834.10, stdev=1124.23 00:28:40.474 clat percentiles (msec): 00:28:40.474 | 1.00th=[ 207], 5.00th=[ 218], 10.00th=[ 241], 20.00th=[ 284], 00:28:40.474 | 30.00th=[ 317], 40.00th=[ 376], 50.00th=[ 422], 60.00th=[ 435], 00:28:40.474 | 70.00th=[ 472], 80.00th=[ 575], 90.00th=[ 2265], 95.00th=[ 2366], 00:28:40.474 | 99.00th=[ 6946], 99.50th=[ 8154], 99.90th=[ 8154], 99.95th=[ 8154], 00:28:40.474 | 99.99th=[ 8154] 00:28:40.474 bw ( KiB/s): min=241664, max=387072, per=14.49%, avg=296978.75, stdev=66598.21, samples=4 00:28:40.474 iops : min= 236, max= 378, avg=290.00, stdev=65.05, samples=4 00:28:40.474 lat (msec) : 250=13.66%, 500=60.41%, 750=6.07%, 2000=0.83%, >=2000=19.03% 00:28:40.474 cpu : usr=0.03%, sys=1.35%, ctx=813, majf=0, minf=32769 00:28:40.474 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.3% 00:28:40.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.474 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:40.474 issued rwts: total=725,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.474 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.474 job3: (groupid=0, jobs=1): err= 0: pid=3191051: 
Wed Nov 6 15:33:07 2024 00:28:40.474 read: IOPS=4, BW=5052KiB/s (5173kB/s)(70.0MiB/14189msec) 00:28:40.474 slat (usec): min=935, max=2077.3k, avg=144311.29, stdev=492051.60 00:28:40.474 clat (msec): min=4086, max=14187, avg=10359.73, stdev=4216.04 00:28:40.474 lat (msec): min=4197, max=14188, avg=10504.04, stdev=4170.85 00:28:40.474 clat percentiles (msec): 00:28:40.474 | 1.00th=[ 4077], 5.00th=[ 4212], 10.00th=[ 4212], 20.00th=[ 4212], 00:28:40.474 | 30.00th=[ 6342], 40.00th=[ 8557], 50.00th=[12818], 60.00th=[14026], 00:28:40.474 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:28:40.474 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:28:40.474 | 99.99th=[14160] 00:28:40.474 lat (msec) : >=2000=100.00% 00:28:40.474 cpu : usr=0.01%, sys=0.53%, ctx=96, majf=0, minf=17921 00:28:40.474 IO depths : 1=1.4%, 2=2.9%, 4=5.7%, 8=11.4%, 16=22.9%, 32=45.7%, >=64=10.0% 00:28:40.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.475 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:40.475 issued rwts: total=70,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.475 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.475 job3: (groupid=0, jobs=1): err= 0: pid=3191052: Wed Nov 6 15:33:07 2024 00:28:40.475 read: IOPS=4, BW=4884KiB/s (5001kB/s)(67.0MiB/14047msec) 00:28:40.475 slat (usec): min=876, max=2036.2k, avg=178272.14, stdev=540579.09 00:28:40.475 clat (msec): min=2101, max=14043, avg=9727.64, stdev=3722.60 00:28:40.475 lat (msec): min=4137, max=14046, avg=9905.91, stdev=3636.88 00:28:40.475 clat percentiles (msec): 00:28:40.475 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:28:40.475 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[10671], 00:28:40.475 | 70.00th=[12818], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:28:40.475 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:28:40.475 | 99.99th=[14026] 00:28:40.475 lat (msec) : >=2000=100.00% 00:28:40.475 cpu : usr=0.00%, sys=0.46%, ctx=61, majf=0, minf=17153 00:28:40.475 IO depths : 1=1.5%, 2=3.0%, 4=6.0%, 8=11.9%, 16=23.9%, 32=47.8%, >=64=6.0% 00:28:40.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.475 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:40.475 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.475 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.475 job3: (groupid=0, jobs=1): err= 0: pid=3191053: Wed Nov 6 15:33:07 2024 00:28:40.475 read: IOPS=1, BW=1898KiB/s (1943kB/s)(26.0MiB/14028msec) 00:28:40.475 slat (usec): min=1095, max=2090.9k, avg=459091.74, stdev=821322.38 00:28:40.475 clat (msec): min=2091, max=14026, avg=9970.86, stdev=3774.58 00:28:40.475 lat (msec): min=4159, max=14027, avg=10429.95, stdev=3493.30 00:28:40.475 clat percentiles (msec): 00:28:40.475 | 1.00th=[ 2089], 5.00th=[ 4144], 10.00th=[ 4178], 20.00th=[ 6342], 00:28:40.475 | 30.00th=[ 8423], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12684], 00:28:40.475 | 70.00th=[12818], 80.00th=[13892], 90.00th=[14026], 95.00th=[14026], 00:28:40.475 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:28:40.475 | 99.99th=[14026] 00:28:40.475 lat (msec) : >=2000=100.00% 00:28:40.475 cpu : usr=0.00%, sys=0.19%, ctx=59, majf=0, minf=6657 00:28:40.475 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0% 00:28:40.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.475 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:28:40.475 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.475 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.475 job3: (groupid=0, jobs=1): err= 0: pid=3191054: Wed Nov 6 15:33:07 2024 00:28:40.475 read: IOPS=31, BW=31.3MiB/s (32.8MB/s)(441MiB/14109msec) 00:28:40.475 slat (usec): min=38, max=2062.8k, avg=27226.21, stdev=202148.23 00:28:40.475 clat (msec): min=295, max=8212, avg=2232.38, stdev=2491.25 00:28:40.475 lat (msec): min=297, max=8215, avg=2259.60, stdev=2508.99 00:28:40.475 clat percentiles (msec): 00:28:40.475 | 1.00th=[ 296], 5.00th=[ 317], 10.00th=[ 330], 20.00th=[ 380], 00:28:40.475 | 30.00th=[ 418], 40.00th=[ 435], 50.00th=[ 447], 60.00th=[ 460], 00:28:40.475 | 70.00th=[ 4933], 80.00th=[ 5134], 90.00th=[ 5269], 95.00th=[ 6879], 00:28:40.475 | 99.00th=[ 8221], 99.50th=[ 8221], 99.90th=[ 8221], 99.95th=[ 8221], 00:28:40.475 | 99.99th=[ 8221] 00:28:40.475 bw ( KiB/s): min= 2052, max=292864, per=6.27%, avg=128615.20, stdev=129884.86, samples=5 00:28:40.475 iops : min= 2, max= 286, avg=125.60, stdev=126.84, samples=5 00:28:40.475 lat (msec) : 500=61.45%, 750=0.91%, 2000=0.68%, >=2000=36.96% 00:28:40.475 cpu : usr=0.00%, sys=0.84%, ctx=535, majf=0, minf=32769 00:28:40.475 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.3%, >=64=85.7% 00:28:40.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.475 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:40.475 issued rwts: total=441,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.475 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.475 job3: (groupid=0, jobs=1): err= 0: pid=3191055: Wed Nov 6 15:33:07 2024 00:28:40.475 read: IOPS=4, BW=5115KiB/s (5238kB/s)(60.0MiB/12011msec) 00:28:40.475 slat (usec): min=712, max=2088.0k, avg=198358.93, stdev=570832.11 00:28:40.475 clat (msec): min=108, max=10790, avg=4706.44, stdev=2047.66 00:28:40.475 lat (msec): min=2092, max=12010, avg=4904.80, stdev=2167.66 00:28:40.475 clat percentiles (msec): 00:28:40.475 | 1.00th=[ 109], 5.00th=[ 2089], 10.00th=[ 3977], 20.00th=[ 4010], 00:28:40.475 | 30.00th=[ 4044], 40.00th=[ 4077], 50.00th=[ 4144], 60.00th=[ 4178], 00:28:40.475 | 70.00th=[ 4212], 80.00th=[ 4329], 90.00th=[ 8557], 95.00th=[ 8658], 00:28:40.475 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:28:40.475 | 99.99th=[10805] 00:28:40.475 lat (msec) : 250=1.67%, >=2000=98.33% 00:28:40.475 cpu : usr=0.01%, sys=0.43%, ctx=144, majf=0, minf=15361 00:28:40.475 IO depths : 1=1.7%, 2=3.3%, 4=6.7%, 8=13.3%, 16=26.7%, 32=48.3%, >=64=0.0% 00:28:40.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.475 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:40.475 issued rwts: total=60,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.475 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.475 job3: (groupid=0, jobs=1): err= 0: pid=3191056: Wed Nov 6 15:33:07 2024 00:28:40.475 read: IOPS=3, BW=3323KiB/s (3402kB/s)(46.0MiB/14177msec) 00:28:40.475 slat (usec): min=1035, max=2057.2k, avg=217495.83, stdev=587279.01 00:28:40.475 clat (msec): min=4171, max=14173, avg=11874.47, stdev=3222.94 00:28:40.475 lat (msec): min=4267, max=14176, avg=12091.97, stdev=3022.93 00:28:40.475 clat percentiles (msec): 00:28:40.475 | 1.00th=[ 4178], 5.00th=[ 6275], 10.00th=[ 6342], 20.00th=[ 
8490], 00:28:40.475 | 30.00th=[10671], 40.00th=[12818], 50.00th=[14026], 60.00th=[14026], 00:28:40.475 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:28:40.475 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:28:40.475 | 99.99th=[14160] 00:28:40.475 lat (msec) : >=2000=100.00% 00:28:40.475 cpu : usr=0.00%, sys=0.36%, ctx=73, majf=0, minf=11777 00:28:40.475 IO depths : 1=2.2%, 2=4.3%, 4=8.7%, 8=17.4%, 16=34.8%, 32=32.6%, >=64=0.0% 00:28:40.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.475 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:40.475 issued rwts: total=46,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.475 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.475 job3: (groupid=0, jobs=1): err= 0: pid=3191057: Wed Nov 6 15:33:07 2024 00:28:40.475 read: IOPS=2, BW=2321KiB/s (2377kB/s)(32.0MiB/14116msec) 00:28:40.475 slat (usec): min=1473, max=2080.3k, avg=375379.13, stdev=755294.55 00:28:40.475 clat (msec): min=2103, max=14112, avg=9483.56, stdev=3814.32 00:28:40.475 lat (msec): min=4154, max=14115, avg=9858.94, stdev=3652.24 00:28:40.475 clat percentiles (msec): 00:28:40.475 | 1.00th=[ 2106], 5.00th=[ 4144], 10.00th=[ 4212], 20.00th=[ 6342], 00:28:40.475 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[10537], 60.00th=[10671], 00:28:40.475 | 70.00th=[12818], 80.00th=[13892], 90.00th=[14026], 95.00th=[14160], 00:28:40.475 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:28:40.475 | 99.99th=[14160] 00:28:40.475 lat (msec) : >=2000=100.00% 00:28:40.475 cpu : usr=0.00%, sys=0.25%, ctx=64, majf=0, minf=8193 00:28:40.475 IO depths : 1=3.1%, 2=6.2%, 4=12.5%, 8=25.0%, 16=50.0%, 32=3.1%, >=64=0.0% 00:28:40.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.475 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:28:40.475 issued rwts: total=32,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.475 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.475 job3: (groupid=0, jobs=1): err= 0: pid=3191058: Wed Nov 6 15:33:07 2024 00:28:40.475 read: IOPS=4, BW=4737KiB/s (4850kB/s)(65.0MiB/14052msec) 00:28:40.475 slat (usec): min=939, max=2066.4k, avg=183802.58, stdev=545938.91 00:28:40.475 clat (msec): min=2103, max=14049, avg=8745.39, stdev=3540.93 00:28:40.475 lat (msec): min=4170, max=14051, avg=8929.19, stdev=3500.64 00:28:40.475 clat percentiles (msec): 00:28:40.475 | 1.00th=[ 2106], 5.00th=[ 4245], 10.00th=[ 6074], 20.00th=[ 6141], 00:28:40.475 | 30.00th=[ 6208], 40.00th=[ 6208], 50.00th=[ 6275], 60.00th=[ 8557], 00:28:40.475 | 70.00th=[10671], 80.00th=[13892], 90.00th=[14026], 95.00th=[14026], 00:28:40.475 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:28:40.475 | 99.99th=[14026] 00:28:40.475 lat (msec) : >=2000=100.00% 00:28:40.475 cpu : usr=0.03%, sys=0.44%, ctx=123, majf=0, minf=16641 00:28:40.475 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.3%, 16=24.6%, 32=49.2%, >=64=3.1% 00:28:40.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.475 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:40.475 issued rwts: total=65,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.475 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.475 job4: (groupid=0, jobs=1): err= 0: pid=3191059: Wed Nov 6 15:33:07 2024 00:28:40.475 read: IOPS=18, BW=19.0MiB/s (19.9MB/s)(229MiB/12082msec) 00:28:40.475 
slat (usec): min=82, max=2072.2k, avg=43952.63, stdev=260121.85 00:28:40.475 clat (msec): min=552, max=11926, avg=5940.13, stdev=3951.71 00:28:40.475 lat (msec): min=555, max=12047, avg=5984.08, stdev=3959.32 00:28:40.475 clat percentiles (msec): 00:28:40.475 | 1.00th=[ 558], 5.00th=[ 575], 10.00th=[ 634], 20.00th=[ 1905], 00:28:40.475 | 30.00th=[ 2140], 40.00th=[ 4044], 50.00th=[ 6409], 60.00th=[ 8490], 00:28:40.475 | 70.00th=[10000], 80.00th=[10000], 90.00th=[10134], 95.00th=[10268], 00:28:40.475 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:28:40.475 | 99.99th=[11879] 00:28:40.475 bw ( KiB/s): min= 1499, max=77824, per=1.27%, avg=26043.38, stdev=24106.60, samples=8 00:28:40.475 iops : min= 1, max= 76, avg=25.38, stdev=23.61, samples=8 00:28:40.475 lat (msec) : 750=17.90%, 2000=11.35%, >=2000=70.74% 00:28:40.475 cpu : usr=0.00%, sys=0.83%, ctx=148, majf=0, minf=32769 00:28:40.475 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.5%, 16=7.0%, 32=14.0%, >=64=72.5% 00:28:40.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.475 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:28:40.475 issued rwts: total=229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.475 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.475 job4: (groupid=0, jobs=1): err= 0: pid=3191060: Wed Nov 6 15:33:07 2024 00:28:40.475 read: IOPS=1, BW=1186KiB/s (1215kB/s)(14.0MiB/12086msec) 00:28:40.475 slat (msec): min=13, max=4285, avg=855.14, stdev=1317.76 00:28:40.475 clat (msec): min=113, max=10854, avg=3926.85, stdev=2700.40 00:28:40.475 lat (msec): min=2146, max=12085, avg=4781.99, stdev=3241.30 00:28:40.475 clat percentiles (msec): 00:28:40.475 | 1.00th=[ 114], 5.00th=[ 114], 10.00th=[ 2140], 20.00th=[ 2165], 00:28:40.476 | 30.00th=[ 2232], 40.00th=[ 2265], 50.00th=[ 2299], 60.00th=[ 4396], 00:28:40.476 | 70.00th=[ 4396], 80.00th=[ 6544], 90.00th=[ 6544], 95.00th=[10805], 00:28:40.476 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:28:40.476 | 99.99th=[10805] 00:28:40.476 lat (msec) : 250=7.14%, >=2000=92.86% 00:28:40.476 cpu : usr=0.01%, sys=0.12%, ctx=57, majf=0, minf=3585 00:28:40.476 IO depths : 1=7.1%, 2=14.3%, 4=28.6%, 8=50.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:40.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.476 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.476 issued rwts: total=14,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.476 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.476 job4: (groupid=0, jobs=1): err= 0: pid=3191061: Wed Nov 6 15:33:07 2024 00:28:40.476 read: IOPS=43, BW=43.4MiB/s (45.5MB/s)(435MiB/10026msec) 00:28:40.476 slat (usec): min=115, max=2045.8k, avg=22977.49, stdev=149001.65 00:28:40.476 clat (msec): min=25, max=3584, avg=1759.69, stdev=1189.09 00:28:40.476 lat (msec): min=39, max=4816, avg=1782.67, stdev=1200.95 00:28:40.476 clat percentiles (msec): 00:28:40.476 | 1.00th=[ 45], 5.00th=[ 236], 10.00th=[ 380], 20.00th=[ 760], 00:28:40.476 | 30.00th=[ 1183], 40.00th=[ 1250], 50.00th=[ 1351], 60.00th=[ 1418], 00:28:40.476 | 70.00th=[ 3339], 80.00th=[ 3406], 90.00th=[ 3473], 95.00th=[ 3540], 00:28:40.476 | 99.00th=[ 3574], 99.50th=[ 3574], 99.90th=[ 3574], 99.95th=[ 3574], 00:28:40.476 | 99.99th=[ 3574] 00:28:40.476 bw ( KiB/s): min=14336, max=131541, per=3.84%, avg=78628.62, stdev=41054.05, samples=8 00:28:40.476 iops : min= 14, max= 128, avg=76.62, stdev=39.98, samples=8 
00:28:40.476 lat (msec) : 50=1.38%, 100=2.07%, 250=1.61%, 500=9.20%, 750=5.52% 00:28:40.476 lat (msec) : 1000=5.52%, 2000=43.91%, >=2000=30.80% 00:28:40.476 cpu : usr=0.07%, sys=2.07%, ctx=758, majf=0, minf=32769 00:28:40.476 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.7%, 32=7.4%, >=64=85.5% 00:28:40.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.476 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:40.476 issued rwts: total=435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.476 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.476 job4: (groupid=0, jobs=1): err= 0: pid=3191062: Wed Nov 6 15:33:07 2024 00:28:40.476 read: IOPS=87, BW=87.2MiB/s (91.4MB/s)(884MiB/10142msec) 00:28:40.476 slat (usec): min=70, max=2039.2k, avg=11407.79, stdev=103127.58 00:28:40.476 clat (msec): min=53, max=4125, avg=1186.55, stdev=1109.20 00:28:40.476 lat (msec): min=153, max=4127, avg=1197.96, stdev=1114.48 00:28:40.476 clat percentiles (msec): 00:28:40.476 | 1.00th=[ 192], 5.00th=[ 330], 10.00th=[ 451], 20.00th=[ 575], 00:28:40.476 | 30.00th=[ 600], 40.00th=[ 634], 50.00th=[ 693], 60.00th=[ 718], 00:28:40.476 | 70.00th=[ 751], 80.00th=[ 2769], 90.00th=[ 2903], 95.00th=[ 4010], 00:28:40.476 | 99.00th=[ 4111], 99.50th=[ 4111], 99.90th=[ 4111], 99.95th=[ 4111], 00:28:40.476 | 99.99th=[ 4111] 00:28:40.476 bw ( KiB/s): min= 6131, max=243712, per=7.55%, avg=154808.20, stdev=79338.44, samples=10 00:28:40.476 iops : min= 5, max= 238, avg=151.00, stdev=77.75, samples=10 00:28:40.476 lat (msec) : 100=0.11%, 250=2.38%, 500=8.60%, 750=58.71%, 1000=7.92% 00:28:40.476 lat (msec) : >=2000=22.29% 00:28:40.476 cpu : usr=0.03%, sys=2.26%, ctx=779, majf=0, minf=32769 00:28:40.476 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9% 00:28:40.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.476 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:40.476 issued rwts: total=884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.476 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.476 job4: (groupid=0, jobs=1): err= 0: pid=3191063: Wed Nov 6 15:33:07 2024 00:28:40.476 read: IOPS=45, BW=45.7MiB/s (47.9MB/s)(463MiB/10141msec) 00:28:40.476 slat (usec): min=153, max=2045.8k, avg=21776.82, stdev=144495.14 00:28:40.476 clat (msec): min=53, max=4736, avg=2067.87, stdev=1387.75 00:28:40.476 lat (msec): min=155, max=4738, avg=2089.65, stdev=1392.50 00:28:40.476 clat percentiles (msec): 00:28:40.476 | 1.00th=[ 169], 5.00th=[ 330], 10.00th=[ 485], 20.00th=[ 1028], 00:28:40.476 | 30.00th=[ 1150], 40.00th=[ 1284], 50.00th=[ 1368], 60.00th=[ 1519], 00:28:40.476 | 70.00th=[ 3406], 80.00th=[ 3507], 90.00th=[ 3641], 95.00th=[ 4665], 00:28:40.476 | 99.00th=[ 4732], 99.50th=[ 4732], 99.90th=[ 4732], 99.95th=[ 4732], 00:28:40.476 | 99.99th=[ 4732] 00:28:40.476 bw ( KiB/s): min=24576, max=122880, per=4.18%, avg=85747.62, stdev=34204.89, samples=8 00:28:40.476 iops : min= 24, max= 120, avg=83.62, stdev=33.54, samples=8 00:28:40.476 lat (msec) : 100=0.22%, 250=2.38%, 500=7.78%, 750=4.75%, 1000=4.32% 00:28:40.476 lat (msec) : 2000=42.55%, >=2000=38.01% 00:28:40.476 cpu : usr=0.08%, sys=2.16%, ctx=799, majf=0, minf=32769 00:28:40.476 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=6.9%, >=64=86.4% 00:28:40.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.476 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 
00:28:40.476 issued rwts: total=463,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.476 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.476 job4: (groupid=0, jobs=1): err= 0: pid=3191064: Wed Nov 6 15:33:07 2024 00:28:40.476 read: IOPS=43, BW=43.9MiB/s (46.0MB/s)(532MiB/12121msec) 00:28:40.476 slat (usec): min=43, max=2072.2k, avg=22561.17, stdev=186833.35 00:28:40.476 clat (msec): min=114, max=5817, avg=2002.05, stdev=2364.07 00:28:40.476 lat (msec): min=283, max=5820, avg=2024.62, stdev=2371.70 00:28:40.476 clat percentiles (msec): 00:28:40.476 | 1.00th=[ 284], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 292], 00:28:40.476 | 30.00th=[ 296], 40.00th=[ 300], 50.00th=[ 305], 60.00th=[ 317], 00:28:40.476 | 70.00th=[ 4396], 80.00th=[ 5470], 90.00th=[ 5604], 95.00th=[ 5671], 00:28:40.476 | 99.00th=[ 5805], 99.50th=[ 5805], 99.90th=[ 5805], 99.95th=[ 5805], 00:28:40.476 | 99.99th=[ 5805] 00:28:40.476 bw ( KiB/s): min= 6144, max=430080, per=8.07%, avg=165478.40, stdev=172915.98, samples=5 00:28:40.476 iops : min= 6, max= 420, avg=161.60, stdev=168.86, samples=5 00:28:40.476 lat (msec) : 250=0.19%, 500=63.53%, 2000=0.56%, >=2000=35.71% 00:28:40.476 cpu : usr=0.07%, sys=1.29%, ctx=455, majf=0, minf=32769 00:28:40.476 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.2% 00:28:40.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.476 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:40.476 issued rwts: total=532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.476 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.476 job4: (groupid=0, jobs=1): err= 0: pid=3191065: Wed Nov 6 15:33:07 2024 00:28:40.476 read: IOPS=20, BW=20.8MiB/s (21.8MB/s)(251MiB/12088msec) 00:28:40.476 slat (usec): min=83, max=2053.9k, avg=47700.42, stdev=283236.33 00:28:40.476 clat (msec): min=113, max=11349, avg=5918.42, stdev=4393.17 00:28:40.476 lat (msec): min=620, max=11402, avg=5966.12, stdev=4389.22 00:28:40.476 clat percentiles (msec): 00:28:40.476 | 1.00th=[ 617], 5.00th=[ 625], 10.00th=[ 625], 20.00th=[ 676], 00:28:40.476 | 30.00th=[ 793], 40.00th=[ 4279], 50.00th=[ 6477], 60.00th=[ 7215], 00:28:40.476 | 70.00th=[11073], 80.00th=[11073], 90.00th=[11208], 95.00th=[11342], 00:28:40.476 | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342], 00:28:40.476 | 99.99th=[11342] 00:28:40.476 bw ( KiB/s): min= 6144, max=118784, per=2.04%, avg=41907.00, stdev=42200.03, samples=6 00:28:40.476 iops : min= 6, max= 116, avg=40.67, stdev=41.29, samples=6 00:28:40.476 lat (msec) : 250=0.40%, 750=28.69%, 1000=1.59%, >=2000=69.32% 00:28:40.476 cpu : usr=0.01%, sys=1.24%, ctx=185, majf=0, minf=32769 00:28:40.476 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.4%, 32=12.7%, >=64=74.9% 00:28:40.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.476 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:28:40.476 issued rwts: total=251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.476 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.476 job4: (groupid=0, jobs=1): err= 0: pid=3191066: Wed Nov 6 15:33:07 2024 00:28:40.476 read: IOPS=12, BW=12.3MiB/s (12.9MB/s)(149MiB/12119msec) 00:28:40.476 slat (usec): min=791, max=2152.7k, avg=67170.22, stdev=335306.61 00:28:40.476 clat (msec): min=1357, max=12064, avg=9744.53, stdev=2773.16 00:28:40.476 lat (msec): min=1359, max=12067, avg=9811.70, stdev=2706.69 00:28:40.476 clat percentiles (msec): 00:28:40.476 
| 1.00th=[ 1368], 5.00th=[ 3574], 10.00th=[ 4279], 20.00th=[ 7819], 00:28:40.476 | 30.00th=[10805], 40.00th=[10939], 50.00th=[11073], 60.00th=[11073], 00:28:40.476 | 70.00th=[11342], 80.00th=[11476], 90.00th=[11745], 95.00th=[12013], 00:28:40.476 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:28:40.476 | 99.99th=[12013] 00:28:40.476 bw ( KiB/s): min= 4096, max=14336, per=0.44%, avg=9002.00, stdev=4709.15, samples=5 00:28:40.476 iops : min= 4, max= 14, avg= 8.40, stdev= 4.39, samples=5 00:28:40.476 lat (msec) : 2000=1.34%, >=2000=98.66% 00:28:40.476 cpu : usr=0.00%, sys=1.01%, ctx=361, majf=0, minf=32769 00:28:40.476 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=5.4%, 16=10.7%, 32=21.5%, >=64=57.7% 00:28:40.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.476 complete : 0=0.0%, 4=95.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=4.3% 00:28:40.476 issued rwts: total=149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.476 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.476 job4: (groupid=0, jobs=1): err= 0: pid=3191067: Wed Nov 6 15:33:07 2024 00:28:40.476 read: IOPS=256, BW=256MiB/s (269MB/s)(2595MiB/10130msec) 00:28:40.476 slat (usec): min=42, max=2017.0k, avg=3850.99, stdev=56102.85 00:28:40.476 clat (msec): min=124, max=4705, avg=479.23, stdev=907.99 00:28:40.476 lat (msec): min=131, max=4709, avg=483.08, stdev=911.56 00:28:40.476 clat percentiles (msec): 00:28:40.476 | 1.00th=[ 134], 5.00th=[ 134], 10.00th=[ 136], 20.00th=[ 136], 00:28:40.476 | 30.00th=[ 144], 40.00th=[ 182], 50.00th=[ 228], 60.00th=[ 284], 00:28:40.476 | 70.00th=[ 305], 80.00th=[ 443], 90.00th=[ 634], 95.00th=[ 2333], 00:28:40.476 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4732], 99.95th=[ 4732], 00:28:40.476 | 99.99th=[ 4732] 00:28:40.476 bw ( KiB/s): min=18395, max=935936, per=18.96%, avg=388659.77, stdev=299799.56, samples=13 00:28:40.476 iops : min= 17, max= 914, avg=379.31, stdev=293.01, samples=13 00:28:40.476 lat (msec) : 250=57.07%, 500=26.55%, 750=9.94%, 1000=0.81%, >=2000=5.63% 00:28:40.476 cpu : usr=0.12%, sys=3.36%, ctx=2315, majf=0, minf=32769 00:28:40.476 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:28:40.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:40.476 issued rwts: total=2595,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.476 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.476 job4: (groupid=0, jobs=1): err= 0: pid=3191068: Wed Nov 6 15:33:07 2024 00:28:40.477 read: IOPS=5, BW=5788KiB/s (5927kB/s)(68.0MiB/12030msec) 00:28:40.477 slat (usec): min=555, max=2059.9k, avg=174791.80, stdev=545445.82 00:28:40.477 clat (msec): min=143, max=12026, avg=7292.04, stdev=3478.62 00:28:40.477 lat (msec): min=2141, max=12029, avg=7466.83, stdev=3412.05 00:28:40.477 clat percentiles (msec): 00:28:40.477 | 1.00th=[ 144], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 4329], 00:28:40.477 | 30.00th=[ 4396], 40.00th=[ 6477], 50.00th=[ 6544], 60.00th=[ 8658], 00:28:40.477 | 70.00th=[10671], 80.00th=[10805], 90.00th=[12013], 95.00th=[12013], 00:28:40.477 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:28:40.477 | 99.99th=[12013] 00:28:40.477 lat (msec) : 250=1.47%, >=2000=98.53% 00:28:40.477 cpu : usr=0.00%, sys=0.57%, ctx=61, majf=0, minf=17409 00:28:40.477 IO depths : 1=1.5%, 2=2.9%, 4=5.9%, 8=11.8%, 16=23.5%, 32=47.1%, >=64=7.4% 00:28:40.477 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.477 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:40.477 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.477 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.477 job4: (groupid=0, jobs=1): err= 0: pid=3191069: Wed Nov 6 15:33:07 2024 00:28:40.477 read: IOPS=19, BW=19.8MiB/s (20.8MB/s)(240MiB/12128msec) 00:28:40.477 slat (usec): min=162, max=2152.6k, avg=49911.40, stdev=294061.22 00:28:40.477 clat (msec): min=147, max=11229, avg=6096.32, stdev=4622.74 00:28:40.477 lat (msec): min=502, max=11232, avg=6146.23, stdev=4614.69 00:28:40.477 clat percentiles (msec): 00:28:40.477 | 1.00th=[ 498], 5.00th=[ 506], 10.00th=[ 567], 20.00th=[ 667], 00:28:40.477 | 30.00th=[ 818], 40.00th=[ 2702], 50.00th=[ 6544], 60.00th=[ 9060], 00:28:40.477 | 70.00th=[10939], 80.00th=[11073], 90.00th=[11073], 95.00th=[11208], 00:28:40.477 | 99.00th=[11208], 99.50th=[11208], 99.90th=[11208], 99.95th=[11208], 00:28:40.477 | 99.99th=[11208] 00:28:40.477 bw ( KiB/s): min= 4096, max=139264, per=1.60%, avg=32768.00, stdev=47340.85, samples=7 00:28:40.477 iops : min= 4, max= 136, avg=32.00, stdev=46.23, samples=7 00:28:40.477 lat (msec) : 250=0.42%, 500=2.08%, 750=24.58%, 1000=6.25%, >=2000=66.67% 00:28:40.477 cpu : usr=0.00%, sys=1.10%, ctx=330, majf=0, minf=32769 00:28:40.477 IO depths : 1=0.4%, 2=0.8%, 4=1.7%, 8=3.3%, 16=6.7%, 32=13.3%, >=64=73.8% 00:28:40.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.477 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:28:40.477 issued rwts: total=240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.477 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.477 job4: (groupid=0, jobs=1): err= 0: pid=3191070: Wed Nov 6 15:33:07 2024 00:28:40.477 read: IOPS=3, BW=3887KiB/s (3980kB/s)(46.0MiB/12119msec) 00:28:40.477 slat (usec): min=964, max=2044.7k, avg=260893.17, stdev=643658.83 00:28:40.477 clat (msec): min=116, max=12115, avg=8460.29, stdev=3761.50 00:28:40.477 lat (msec): min=2161, max=12118, avg=8721.18, stdev=3581.85 00:28:40.477 clat percentiles (msec): 00:28:40.477 | 1.00th=[ 117], 5.00th=[ 2232], 10.00th=[ 2265], 20.00th=[ 4396], 00:28:40.477 | 30.00th=[ 6477], 40.00th=[ 8658], 50.00th=[ 8792], 60.00th=[10805], 00:28:40.477 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:28:40.477 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:40.477 | 99.99th=[12147] 00:28:40.477 lat (msec) : 250=2.17%, >=2000=97.83% 00:28:40.477 cpu : usr=0.00%, sys=0.39%, ctx=67, majf=0, minf=11777 00:28:40.477 IO depths : 1=2.2%, 2=4.3%, 4=8.7%, 8=17.4%, 16=34.8%, 32=32.6%, >=64=0.0% 00:28:40.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.477 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:40.477 issued rwts: total=46,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.477 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.477 job4: (groupid=0, jobs=1): err= 0: pid=3191071: Wed Nov 6 15:33:07 2024 00:28:40.477 read: IOPS=10, BW=10.2MiB/s (10.7MB/s)(123MiB/12095msec) 00:28:40.477 slat (usec): min=639, max=2073.8k, avg=81307.97, stdev=368461.69 00:28:40.477 clat (msec): min=2093, max=12093, avg=9539.46, stdev=2969.88 00:28:40.477 lat (msec): min=4167, max=12094, avg=9620.77, stdev=2900.45 00:28:40.477 clat percentiles (msec): 00:28:40.477 | 1.00th=[ 4178], 5.00th=[ 4212], 
10.00th=[ 4279], 20.00th=[ 6477], 00:28:40.477 | 30.00th=[ 6477], 40.00th=[ 8658], 50.00th=[11745], 60.00th=[11879], 00:28:40.477 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12013], 95.00th=[12147], 00:28:40.477 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:40.477 | 99.99th=[12147] 00:28:40.477 lat (msec) : >=2000=100.00% 00:28:40.477 cpu : usr=0.00%, sys=1.03%, ctx=79, majf=0, minf=31489 00:28:40.477 IO depths : 1=0.8%, 2=1.6%, 4=3.3%, 8=6.5%, 16=13.0%, 32=26.0%, >=64=48.8% 00:28:40.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.477 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:40.477 issued rwts: total=123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.477 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.477 job5: (groupid=0, jobs=1): err= 0: pid=3191072: Wed Nov 6 15:33:07 2024 00:28:40.477 read: IOPS=8, BW=8977KiB/s (9193kB/s)(106MiB/12091msec) 00:28:40.477 slat (usec): min=509, max=2054.6k, avg=112629.97, stdev=434549.35 00:28:40.477 clat (msec): min=151, max=12087, avg=7960.77, stdev=3672.07 00:28:40.477 lat (msec): min=2153, max=12090, avg=8073.40, stdev=3612.90 00:28:40.477 clat percentiles (msec): 00:28:40.477 | 1.00th=[ 2165], 5.00th=[ 2232], 10.00th=[ 2299], 20.00th=[ 4396], 00:28:40.477 | 30.00th=[ 4463], 40.00th=[ 6611], 50.00th=[ 8658], 60.00th=[10805], 00:28:40.477 | 70.00th=[10939], 80.00th=[11879], 90.00th=[12013], 95.00th=[12013], 00:28:40.477 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:40.477 | 99.99th=[12147] 00:28:40.477 lat (msec) : 250=0.94%, >=2000=99.06% 00:28:40.477 cpu : usr=0.00%, sys=0.86%, ctx=88, majf=0, minf=27137 00:28:40.477 IO depths : 1=0.9%, 2=1.9%, 4=3.8%, 8=7.5%, 16=15.1%, 32=30.2%, >=64=40.6% 00:28:40.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.477 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:40.477 issued rwts: total=106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.477 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.477 job5: (groupid=0, jobs=1): err= 0: pid=3191073: Wed Nov 6 15:33:07 2024 00:28:40.477 read: IOPS=171, BW=172MiB/s (180MB/s)(1741MiB/10131msec) 00:28:40.477 slat (usec): min=44, max=2122.5k, avg=5740.38, stdev=85705.03 00:28:40.477 clat (msec): min=128, max=4816, avg=715.61, stdev=1231.29 00:28:40.477 lat (msec): min=130, max=4822, avg=721.35, stdev=1236.00 00:28:40.477 clat percentiles (msec): 00:28:40.477 | 1.00th=[ 138], 5.00th=[ 138], 10.00th=[ 138], 20.00th=[ 140], 00:28:40.477 | 30.00th=[ 140], 40.00th=[ 140], 50.00th=[ 163], 60.00th=[ 243], 00:28:40.477 | 70.00th=[ 296], 80.00th=[ 609], 90.00th=[ 2433], 95.00th=[ 4665], 00:28:40.477 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], 00:28:40.477 | 99.99th=[ 4799] 00:28:40.477 bw ( KiB/s): min= 4096, max=937984, per=17.91%, avg=367220.56, stdev=338941.39, samples=9 00:28:40.477 iops : min= 4, max= 916, avg=358.44, stdev=331.15, samples=9 00:28:40.477 lat (msec) : 250=60.65%, 500=14.30%, 750=9.48%, >=2000=15.57% 00:28:40.477 cpu : usr=0.09%, sys=2.62%, ctx=1580, majf=0, minf=32769 00:28:40.477 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:28:40.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.477 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:40.477 issued rwts: total=1741,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.477 
latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.477 job5: (groupid=0, jobs=1): err= 0: pid=3191074: Wed Nov 6 15:33:07 2024 00:28:40.477 read: IOPS=88, BW=89.0MiB/s (93.3MB/s)(1076MiB/12094msec) 00:28:40.477 slat (usec): min=46, max=2041.8k, avg=11096.94, stdev=99637.35 00:28:40.477 clat (msec): min=147, max=3727, avg=1124.22, stdev=884.85 00:28:40.477 lat (msec): min=324, max=3730, avg=1135.32, stdev=889.01 00:28:40.477 clat percentiles (msec): 00:28:40.477 | 1.00th=[ 355], 5.00th=[ 384], 10.00th=[ 388], 20.00th=[ 422], 00:28:40.477 | 30.00th=[ 435], 40.00th=[ 460], 50.00th=[ 542], 60.00th=[ 1116], 00:28:40.477 | 70.00th=[ 1318], 80.00th=[ 2299], 90.00th=[ 2467], 95.00th=[ 2567], 00:28:40.477 | 99.00th=[ 3574], 99.50th=[ 3708], 99.90th=[ 3708], 99.95th=[ 3742], 00:28:40.477 | 99.99th=[ 3742] 00:28:40.477 bw ( KiB/s): min=32768, max=321536, per=9.46%, avg=193969.90, stdev=98957.29, samples=10 00:28:40.477 iops : min= 32, max= 314, avg=189.40, stdev=96.64, samples=10 00:28:40.477 lat (msec) : 250=0.09%, 500=47.03%, 750=8.64%, 1000=3.53%, 2000=12.27% 00:28:40.477 lat (msec) : >=2000=28.44% 00:28:40.477 cpu : usr=0.02%, sys=1.72%, ctx=1119, majf=0, minf=32769 00:28:40.477 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=3.0%, >=64=94.1% 00:28:40.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.477 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:40.477 issued rwts: total=1076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.477 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.477 job5: (groupid=0, jobs=1): err= 0: pid=3191075: Wed Nov 6 15:33:07 2024 00:28:40.477 read: IOPS=109, BW=110MiB/s (115MB/s)(1325MiB/12079msec) 00:28:40.477 slat (usec): min=46, max=1982.8k, avg=7587.65, stdev=63092.73 00:28:40.477 clat (msec): min=249, max=2935, avg=1003.68, stdev=805.01 00:28:40.477 lat (msec): min=249, max=2935, avg=1011.26, stdev=807.16 00:28:40.477 clat percentiles (msec): 00:28:40.477 | 1.00th=[ 251], 5.00th=[ 368], 10.00th=[ 393], 20.00th=[ 481], 00:28:40.477 | 30.00th=[ 575], 40.00th=[ 609], 50.00th=[ 667], 60.00th=[ 718], 00:28:40.477 | 70.00th=[ 802], 80.00th=[ 2106], 90.00th=[ 2467], 95.00th=[ 2802], 00:28:40.477 | 99.00th=[ 2869], 99.50th=[ 2903], 99.90th=[ 2937], 99.95th=[ 2937], 00:28:40.477 | 99.99th=[ 2937] 00:28:40.477 bw ( KiB/s): min= 1396, max=342016, per=8.55%, avg=175191.14, stdev=98554.25, samples=14 00:28:40.477 iops : min= 1, max= 334, avg=171.00, stdev=96.35, samples=14 00:28:40.477 lat (msec) : 250=1.06%, 500=20.23%, 750=46.04%, 1000=11.85%, >=2000=20.83% 00:28:40.477 cpu : usr=0.03%, sys=1.71%, ctx=1300, majf=0, minf=32769 00:28:40.477 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.2% 00:28:40.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.477 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:40.477 issued rwts: total=1325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.477 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.477 job5: (groupid=0, jobs=1): err= 0: pid=3191076: Wed Nov 6 15:33:07 2024 00:28:40.477 read: IOPS=226, BW=226MiB/s (237MB/s)(2298MiB/10149msec) 00:28:40.477 slat (usec): min=54, max=2026.6k, avg=4381.68, stdev=48545.53 00:28:40.477 clat (msec): min=68, max=3784, avg=425.42, stdev=456.71 00:28:40.478 lat (msec): min=133, max=3786, avg=429.81, stdev=462.11 00:28:40.478 clat percentiles (msec): 00:28:40.478 | 1.00th=[ 134], 5.00th=[ 136], 
10.00th=[ 186], 20.00th=[ 284], 00:28:40.478 | 30.00th=[ 288], 40.00th=[ 305], 50.00th=[ 363], 60.00th=[ 409], 00:28:40.478 | 70.00th=[ 439], 80.00th=[ 464], 90.00th=[ 558], 95.00th=[ 617], 00:28:40.478 | 99.00th=[ 3742], 99.50th=[ 3775], 99.90th=[ 3775], 99.95th=[ 3775], 00:28:40.478 | 99.99th=[ 3775] 00:28:40.478 bw ( KiB/s): min=209314, max=729088, per=16.68%, avg=341846.08, stdev=133550.56, samples=13 00:28:40.478 iops : min= 204, max= 712, avg=333.62, stdev=130.56, samples=13 00:28:40.478 lat (msec) : 100=0.04%, 250=17.19%, 500=68.54%, 750=12.14%, >=2000=2.09% 00:28:40.478 cpu : usr=0.11%, sys=2.62%, ctx=2051, majf=0, minf=32769 00:28:40.478 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:28:40.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:40.478 issued rwts: total=2298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.478 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.478 job5: (groupid=0, jobs=1): err= 0: pid=3191077: Wed Nov 6 15:33:07 2024 00:28:40.478 read: IOPS=77, BW=77.0MiB/s (80.8MB/s)(772MiB/10022msec) 00:28:40.478 slat (usec): min=50, max=2114.3k, avg=12949.74, stdev=128499.85 00:28:40.478 clat (msec): min=19, max=5485, avg=1429.48, stdev=1647.50 00:28:40.478 lat (msec): min=82, max=5518, avg=1442.43, stdev=1653.75 00:28:40.478 clat percentiles (msec): 00:28:40.478 | 1.00th=[ 102], 5.00th=[ 234], 10.00th=[ 236], 20.00th=[ 262], 00:28:40.478 | 30.00th=[ 300], 40.00th=[ 481], 50.00th=[ 609], 60.00th=[ 651], 00:28:40.478 | 70.00th=[ 2467], 80.00th=[ 2601], 90.00th=[ 4866], 95.00th=[ 5134], 00:28:40.478 | 99.00th=[ 5403], 99.50th=[ 5470], 99.90th=[ 5470], 99.95th=[ 5470], 00:28:40.478 | 99.99th=[ 5470] 00:28:40.478 bw ( KiB/s): min=63488, max=507904, per=12.38%, avg=253871.00, stdev=161798.22, samples=5 00:28:40.478 iops : min= 62, max= 496, avg=247.80, stdev=158.05, samples=5 00:28:40.478 lat (msec) : 20=0.13%, 100=0.65%, 250=15.93%, 500=23.83%, 750=27.59% 00:28:40.478 lat (msec) : >=2000=31.87% 00:28:40.478 cpu : usr=0.04%, sys=1.60%, ctx=1401, majf=0, minf=32769 00:28:40.478 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.1%, >=64=91.8% 00:28:40.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.478 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:40.478 issued rwts: total=772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.478 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.478 job5: (groupid=0, jobs=1): err= 0: pid=3191078: Wed Nov 6 15:33:07 2024 00:28:40.478 read: IOPS=78, BW=78.1MiB/s (81.9MB/s)(782MiB/10018msec) 00:28:40.478 slat (usec): min=43, max=1996.3k, avg=12784.99, stdev=129204.80 00:28:40.478 clat (msec): min=17, max=7900, avg=733.64, stdev=1427.54 00:28:40.478 lat (msec): min=18, max=7901, avg=746.42, stdev=1449.82 00:28:40.478 clat percentiles (msec): 00:28:40.478 | 1.00th=[ 25], 5.00th=[ 144], 10.00th=[ 153], 20.00th=[ 213], 00:28:40.478 | 30.00th=[ 262], 40.00th=[ 305], 50.00th=[ 355], 60.00th=[ 439], 00:28:40.478 | 70.00th=[ 558], 80.00th=[ 567], 90.00th=[ 726], 95.00th=[ 4463], 00:28:40.478 | 99.00th=[ 7886], 99.50th=[ 7886], 99.90th=[ 7886], 99.95th=[ 7886], 00:28:40.478 | 99.99th=[ 7886] 00:28:40.478 bw ( KiB/s): min=163840, max=434176, per=13.09%, avg=268288.00, stdev=125614.38, samples=5 00:28:40.478 iops : min= 160, max= 424, avg=262.00, stdev=122.67, samples=5 00:28:40.478 lat (msec) : 
20=0.38%, 50=1.41%, 250=26.47%, 500=37.34%, 750=25.96% 00:28:40.478 lat (msec) : 1000=1.28%, >=2000=7.16% 00:28:40.478 cpu : usr=0.05%, sys=1.52%, ctx=662, majf=0, minf=32769 00:28:40.478 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.1%, >=64=91.9% 00:28:40.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.478 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:40.478 issued rwts: total=782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.478 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.478 job5: (groupid=0, jobs=1): err= 0: pid=3191079: Wed Nov 6 15:33:07 2024 00:28:40.478 read: IOPS=26, BW=26.2MiB/s (27.4MB/s)(314MiB/11998msec) 00:28:40.478 slat (usec): min=46, max=2004.2k, avg=31870.77, stdev=207035.11 00:28:40.478 clat (msec): min=463, max=10664, avg=4239.19, stdev=3815.71 00:28:40.478 lat (msec): min=470, max=10685, avg=4271.06, stdev=3820.52 00:28:40.478 clat percentiles (msec): 00:28:40.478 | 1.00th=[ 472], 5.00th=[ 485], 10.00th=[ 514], 20.00th=[ 567], 00:28:40.478 | 30.00th=[ 659], 40.00th=[ 894], 50.00th=[ 2165], 60.00th=[ 5403], 00:28:40.478 | 70.00th=[ 7684], 80.00th=[ 9463], 90.00th=[ 9597], 95.00th=[ 9597], 00:28:40.478 | 99.00th=[ 9731], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:28:40.478 | 99.99th=[10671] 00:28:40.478 bw ( KiB/s): min= 2048, max=202752, per=2.67%, avg=54710.86, stdev=72337.36, samples=7 00:28:40.478 iops : min= 2, max= 198, avg=53.43, stdev=70.64, samples=7 00:28:40.478 lat (msec) : 500=7.32%, 750=27.39%, 1000=6.37%, 2000=1.59%, >=2000=57.32% 00:28:40.478 cpu : usr=0.01%, sys=0.90%, ctx=397, majf=0, minf=32769 00:28:40.478 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.5%, 16=5.1%, 32=10.2%, >=64=79.9% 00:28:40.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.478 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:28:40.478 issued rwts: total=314,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.478 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.478 job5: (groupid=0, jobs=1): err= 0: pid=3191080: Wed Nov 6 15:33:07 2024 00:28:40.478 read: IOPS=111, BW=112MiB/s (117MB/s)(1121MiB/10029msec) 00:28:40.478 slat (usec): min=40, max=2141.0k, avg=8925.41, stdev=93095.39 00:28:40.478 clat (msec): min=17, max=3162, avg=959.06, stdev=978.33 00:28:40.478 lat (msec): min=33, max=3165, avg=967.99, stdev=981.06 00:28:40.478 clat percentiles (msec): 00:28:40.478 | 1.00th=[ 43], 5.00th=[ 255], 10.00th=[ 288], 20.00th=[ 305], 00:28:40.478 | 30.00th=[ 334], 40.00th=[ 409], 50.00th=[ 443], 60.00th=[ 527], 00:28:40.478 | 70.00th=[ 735], 80.00th=[ 2433], 90.00th=[ 2702], 95.00th=[ 2970], 00:28:40.478 | 99.00th=[ 3138], 99.50th=[ 3138], 99.90th=[ 3171], 99.95th=[ 3171], 00:28:40.478 | 99.99th=[ 3171] 00:28:40.478 bw ( KiB/s): min=49152, max=432128, per=11.02%, avg=225927.67, stdev=128131.57, samples=9 00:28:40.478 iops : min= 48, max= 422, avg=220.56, stdev=125.17, samples=9 00:28:40.478 lat (msec) : 20=0.09%, 50=1.34%, 250=2.32%, 500=54.59%, 750=12.67% 00:28:40.478 lat (msec) : 1000=5.08%, 2000=0.54%, >=2000=23.37% 00:28:40.478 cpu : usr=0.00%, sys=2.11%, ctx=1603, majf=0, minf=32769 00:28:40.478 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.4% 00:28:40.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.478 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:40.478 issued rwts: total=1121,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:28:40.478 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.478 job5: (groupid=0, jobs=1): err= 0: pid=3191081: Wed Nov 6 15:33:07 2024 00:28:40.478 read: IOPS=35, BW=35.2MiB/s (36.9MB/s)(422MiB/11988msec) 00:28:40.478 slat (usec): min=48, max=2003.3k, avg=23696.99, stdev=173888.32 00:28:40.478 clat (msec): min=522, max=10634, avg=2718.38, stdev=2667.63 00:28:40.478 lat (msec): min=522, max=10681, avg=2742.07, stdev=2687.17 00:28:40.478 clat percentiles (msec): 00:28:40.478 | 1.00th=[ 527], 5.00th=[ 567], 10.00th=[ 567], 20.00th=[ 567], 00:28:40.478 | 30.00th=[ 592], 40.00th=[ 676], 50.00th=[ 785], 60.00th=[ 2869], 00:28:40.478 | 70.00th=[ 4111], 80.00th=[ 6678], 90.00th=[ 6879], 95.00th=[ 7013], 00:28:40.478 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[10671], 99.95th=[10671], 00:28:40.478 | 99.99th=[10671] 00:28:40.478 bw ( KiB/s): min=10240, max=221184, per=4.91%, avg=100693.33, stdev=89114.28, samples=6 00:28:40.478 iops : min= 10, max= 216, avg=98.33, stdev=87.03, samples=6 00:28:40.478 lat (msec) : 750=48.58%, 1000=8.06%, 2000=0.47%, >=2000=42.89% 00:28:40.478 cpu : usr=0.01%, sys=0.95%, ctx=386, majf=0, minf=32769 00:28:40.478 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.8%, 32=7.6%, >=64=85.1% 00:28:40.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.478 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:40.478 issued rwts: total=422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.478 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.478 job5: (groupid=0, jobs=1): err= 0: pid=3191082: Wed Nov 6 15:33:07 2024 00:28:40.478 read: IOPS=9, BW=9565KiB/s (9794kB/s)(113MiB/12098msec) 00:28:40.478 slat (usec): min=805, max=2026.3k, avg=88500.52, stdev=365503.06 00:28:40.478 clat (msec): min=2096, max=12095, avg=7085.81, stdev=3661.29 00:28:40.478 lat (msec): min=2112, max=12097, avg=7174.31, stdev=3660.49 00:28:40.478 clat percentiles (msec): 00:28:40.478 | 1.00th=[ 2106], 5.00th=[ 3775], 10.00th=[ 3876], 20.00th=[ 3977], 00:28:40.478 | 30.00th=[ 4044], 40.00th=[ 4144], 50.00th=[ 6342], 60.00th=[ 6477], 00:28:40.478 | 70.00th=[11879], 80.00th=[12013], 90.00th=[12013], 95.00th=[12147], 00:28:40.478 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:28:40.478 | 99.99th=[12147] 00:28:40.478 lat (msec) : >=2000=100.00% 00:28:40.478 cpu : usr=0.00%, sys=0.98%, ctx=146, majf=0, minf=28929 00:28:40.478 IO depths : 1=0.9%, 2=1.8%, 4=3.5%, 8=7.1%, 16=14.2%, 32=28.3%, >=64=44.2% 00:28:40.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.479 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:28:40.479 issued rwts: total=113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.479 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.479 job5: (groupid=0, jobs=1): err= 0: pid=3191083: Wed Nov 6 15:33:07 2024 00:28:40.479 read: IOPS=76, BW=76.5MiB/s (80.2MB/s)(921MiB/12041msec) 00:28:40.479 slat (usec): min=48, max=2097.9k, avg=12903.05, stdev=133660.96 00:28:40.479 clat (msec): min=152, max=6724, avg=1537.23, stdev=1992.21 00:28:40.479 lat (msec): min=194, max=6725, avg=1550.13, stdev=1999.05 00:28:40.479 clat percentiles (msec): 00:28:40.479 | 1.00th=[ 213], 5.00th=[ 241], 10.00th=[ 257], 20.00th=[ 275], 00:28:40.479 | 30.00th=[ 296], 40.00th=[ 372], 50.00th=[ 414], 60.00th=[ 443], 00:28:40.479 | 70.00th=[ 2123], 80.00th=[ 3306], 90.00th=[ 4530], 95.00th=[ 6611], 00:28:40.479 | 99.00th=[ 
6678], 99.50th=[ 6678], 99.90th=[ 6745], 99.95th=[ 6745], 00:28:40.479 | 99.99th=[ 6745] 00:28:40.479 bw ( KiB/s): min= 4096, max=436224, per=9.90%, avg=203010.00, stdev=191180.54, samples=8 00:28:40.479 iops : min= 4, max= 426, avg=198.25, stdev=186.70, samples=8 00:28:40.479 lat (msec) : 250=8.25%, 500=59.39%, 750=1.19%, >=2000=31.16% 00:28:40.479 cpu : usr=0.05%, sys=1.68%, ctx=706, majf=0, minf=32769 00:28:40.479 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.2% 00:28:40.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.479 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:40.479 issued rwts: total=921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.479 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.479 job5: (groupid=0, jobs=1): err= 0: pid=3191084: Wed Nov 6 15:33:07 2024 00:28:40.479 read: IOPS=81, BW=81.1MiB/s (85.1MB/s)(818MiB/10081msec) 00:28:40.479 slat (usec): min=47, max=2145.5k, avg=12275.47, stdev=129008.52 00:28:40.479 clat (msec): min=33, max=5423, avg=1476.62, stdev=1745.71 00:28:40.479 lat (msec): min=157, max=5426, avg=1488.90, stdev=1750.95 00:28:40.479 clat percentiles (msec): 00:28:40.479 | 1.00th=[ 174], 5.00th=[ 288], 10.00th=[ 288], 20.00th=[ 292], 00:28:40.479 | 30.00th=[ 296], 40.00th=[ 351], 50.00th=[ 518], 60.00th=[ 592], 00:28:40.479 | 70.00th=[ 2400], 80.00th=[ 2534], 90.00th=[ 5067], 95.00th=[ 5269], 00:28:40.479 | 99.00th=[ 5403], 99.50th=[ 5403], 99.90th=[ 5403], 99.95th=[ 5403], 00:28:40.479 | 99.99th=[ 5403] 00:28:40.479 bw ( KiB/s): min= 8192, max=444416, per=8.62%, avg=176640.00, stdev=154115.04, samples=8 00:28:40.479 iops : min= 8, max= 434, avg=172.50, stdev=150.50, samples=8 00:28:40.479 lat (msec) : 50=0.12%, 250=1.47%, 500=45.35%, 750=18.83%, 2000=2.69% 00:28:40.479 lat (msec) : >=2000=31.54% 00:28:40.479 cpu : usr=0.05%, sys=1.79%, ctx=982, majf=0, minf=32769 00:28:40.479 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.3% 00:28:40.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:40.479 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:40.479 issued rwts: total=818,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:40.479 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:40.479 00:28:40.479 Run status group 0 (all jobs): 00:28:40.479 READ: bw=2002MiB/s (2099MB/s), 1093KiB/s-256MiB/s (1120kB/s-269MB/s), io=27.8GiB (29.9GB), run=10018-14227msec 00:28:40.479 00:28:40.479 Disk stats (read/write): 00:28:40.479 nvme0n1: ios=21011/0, merge=0/0, ticks=11224610/0, in_queue=11224610, util=98.46% 00:28:40.479 nvme1n1: ios=37350/0, merge=0/0, ticks=11995588/0, in_queue=11995588, util=98.59% 00:28:40.479 nvme2n1: ios=8728/0, merge=0/0, ticks=10625095/0, in_queue=10625095, util=98.90% 00:28:40.479 nvme3n1: ios=17398/0, merge=0/0, ticks=9721833/0, in_queue=9721833, util=98.92% 00:28:40.479 nvme4n1: ios=47577/0, merge=0/0, ticks=10747027/0, in_queue=10747027, util=99.11% 00:28:40.479 nvme5n1: ios=93998/0, merge=0/0, ticks=12178403/0, in_queue=12178403, util=99.20% 00:28:40.738 15:33:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:28:40.738 15:33:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:28:40.738 15:33:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:40.738 15:33:08 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:28:41.675 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:28:41.675 15:33:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:28:41.675 15:33:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # local i=0 00:28:41.675 15:33:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:28:41.675 15:33:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # grep -q -w SPDK00000000000000 00:28:41.675 15:33:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:28:41.675 15:33:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # grep -q -w SPDK00000000000000 00:28:41.675 15:33:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # return 0 00:28:41.675 15:33:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:41.675 15:33:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.675 15:33:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:41.675 15:33:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.675 15:33:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:41.675 15:33:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:42.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:42.613 15:33:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:28:42.613 15:33:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # local i=0 00:28:42.613 15:33:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:28:42.613 15:33:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # grep -q -w SPDK00000000000001 00:28:42.613 15:33:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:28:42.872 15:33:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # grep -q -w SPDK00000000000001 00:28:42.872 15:33:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # return 0 00:28:42.872 15:33:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:42.872 15:33:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.872 15:33:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:42.872 15:33:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.872 15:33:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 
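The trace above is the srq_overwhelm teardown: for each of the six subsystems the host controller is disconnected, the script polls until the namespace's serial number disappears from lsblk, and only then deletes the subsystem on the target. A minimal stand-alone sketch of that pattern follows; the rpc.py path and the SPDK%014d serial format are assumptions inferred from what this job prints (the in-tree helpers live in test/nvmf/common.sh and common/autotest_common.sh, and the real waitforserial_disconnect also keeps a retry counter, the `local i=0` visible above).

    #!/usr/bin/env bash
    # Sketch of the per-subsystem teardown traced above (srq_overwhelm.sh lines 40-43).
    # Assumes nvme-cli is installed and rpc.py talks to the running nvmf_tgt.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # assumed path
    for i in $(seq 0 5); do
        nqn="nqn.2016-06.io.spdk:cnode${i}"
        serial="SPDK$(printf '%014d' "$i")"       # e.g. SPDK00000000000000 for cnode0
        nvme disconnect -n "$nqn"                 # drop the host-side controller
        # Poll until no block device with that serial is visible any more.
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            sleep 1
        done
        "$rpc" nvmf_delete_subsystem "$nqn"       # then remove the subsystem on the target
    done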
00:28:42.872 15:33:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:28:43.810 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:28:43.810 15:33:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:28:43.810 15:33:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # local i=0 00:28:43.810 15:33:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:28:43.810 15:33:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # grep -q -w SPDK00000000000002 00:28:43.810 15:33:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:28:43.810 15:33:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # grep -q -w SPDK00000000000002 00:28:43.810 15:33:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # return 0 00:28:43.810 15:33:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:43.810 15:33:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.810 15:33:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:43.810 15:33:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.810 15:33:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:43.810 15:33:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:28:44.748 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:28:44.748 15:33:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:28:44.748 15:33:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # local i=0 00:28:44.748 15:33:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:28:44.748 15:33:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # grep -q -w SPDK00000000000003 00:28:44.748 15:33:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # grep -q -w SPDK00000000000003 00:28:44.748 15:33:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:28:44.748 15:33:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # return 0 00:28:44.748 15:33:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:44.748 15:33:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.748 15:33:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:44.748 15:33:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.748 15:33:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in 
$(seq 0 5) 00:28:44.748 15:33:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:28:45.686 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:28:45.686 15:33:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:28:45.686 15:33:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # local i=0 00:28:45.686 15:33:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:28:45.686 15:33:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # grep -q -w SPDK00000000000004 00:28:45.686 15:33:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # grep -q -w SPDK00000000000004 00:28:45.686 15:33:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:28:45.686 15:33:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # return 0 00:28:45.686 15:33:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:28:45.686 15:33:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.686 15:33:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:45.686 15:33:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.686 15:33:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:45.686 15:33:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:28:46.624 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:28:46.624 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:28:46.624 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # local i=0 00:28:46.624 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:28:46.624 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # grep -q -w SPDK00000000000005 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # grep -q -w SPDK00000000000005 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # return 0 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- 
# trap - SIGINT SIGTERM EXIT 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:28:46.884 rmmod nvme_rdma 00:28:46.884 rmmod nvme_fabrics 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # return 0 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@517 -- # '[' -n 3189804 ']' 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@518 -- # killprocess 3189804 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@952 -- # '[' -z 3189804 ']' 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # kill -0 3189804 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@957 -- # uname 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3189804 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3189804' 00:28:46.884 killing process with pid 3189804 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@971 -- # kill 3189804 00:28:46.884 15:33:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@976 -- # wait 3189804 00:28:49.423 15:33:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:49.423 15:33:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:28:49.423 00:28:49.423 real 0m39.187s 00:28:49.423 user 2m9.102s 00:28:49.423 sys 0m17.027s 00:28:49.423 15:33:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:49.423 15:33:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:49.423 ************************************ 00:28:49.423 END TEST nvmf_srq_overwhelm 00:28:49.423 
************************************ 00:28:49.423 15:33:16 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:28:49.424 15:33:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:49.424 15:33:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:49.424 15:33:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:49.424 ************************************ 00:28:49.424 START TEST nvmf_shutdown 00:28:49.424 ************************************ 00:28:49.424 15:33:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:28:49.424 * Looking for test storage... 00:28:49.424 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:28:49.424 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:49.424 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:28:49.424 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:49.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.684 --rc genhtml_branch_coverage=1 00:28:49.684 --rc genhtml_function_coverage=1 00:28:49.684 --rc genhtml_legend=1 00:28:49.684 --rc geninfo_all_blocks=1 00:28:49.684 --rc geninfo_unexecuted_blocks=1 00:28:49.684 00:28:49.684 ' 00:28:49.684 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:49.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.684 --rc genhtml_branch_coverage=1 00:28:49.684 --rc genhtml_function_coverage=1 00:28:49.684 --rc genhtml_legend=1 00:28:49.684 --rc geninfo_all_blocks=1 00:28:49.684 --rc geninfo_unexecuted_blocks=1 00:28:49.684 00:28:49.684 ' 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:49.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.685 --rc genhtml_branch_coverage=1 00:28:49.685 --rc genhtml_function_coverage=1 00:28:49.685 --rc genhtml_legend=1 00:28:49.685 --rc geninfo_all_blocks=1 00:28:49.685 --rc geninfo_unexecuted_blocks=1 00:28:49.685 00:28:49.685 ' 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:49.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.685 --rc genhtml_branch_coverage=1 00:28:49.685 --rc genhtml_function_coverage=1 00:28:49.685 --rc genhtml_legend=1 00:28:49.685 --rc geninfo_all_blocks=1 00:28:49.685 --rc geninfo_unexecuted_blocks=1 00:28:49.685 00:28:49.685 ' 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # 
uname -s 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:49.685 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:49.685 15:33:17 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:49.685 ************************************ 00:28:49.685 START TEST nvmf_shutdown_tc1 00:28:49.685 ************************************ 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:49.685 15:33:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:56.269 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:56.269 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:56.269 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:56.269 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:56.269 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:56.269 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:56.269 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:56.269 15:33:23 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:56.269 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:56.269 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:56.269 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:56.269 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:56.269 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:56.269 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:56.269 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # 
pci_devs=("${mlx[@]}") 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:28:56.531 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:28:56.531 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:18:00.0: mlx_0_0' 00:28:56.531 Found net devices under 0000:18:00.0: mlx_0_0 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:28:56.531 Found net devices under 0000:18:00.1: mlx_0_1 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # rdma_device_init 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 
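Up to this point nvmf_shutdown_tc1 has only probed the kernel RDMA stack and found the two mlx5 ports; the modprobe sequence below restates load_ib_rdma_modules from the trace, wrapped in a loop for brevity:

    #!/usr/bin/env bash
    # Same module set as nvmf/common.sh load_ib_rdma_modules in the trace above.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done
    # The trace then matches the detected netdevs (mlx_0_0, mlx_0_1) against the list
    # produced by scripts/rxe_cfg_small.sh before handing them to allocate_nic_ips.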
00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:56.531 15:33:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:56.531 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:56.531 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:56.531 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:56.531 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:56.531 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:56.531 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:28:56.531 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:56.531 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:56.531 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:28:56.532 15:33:24 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:28:56.532 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:56.532 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:28:56.532 altname enp24s0f0np0 00:28:56.532 altname ens785f0np0 00:28:56.532 inet 192.168.100.8/24 scope global mlx_0_0 00:28:56.532 valid_lft forever preferred_lft forever 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:28:56.532 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:56.532 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:28:56.532 altname enp24s0f1np1 00:28:56.532 altname ens785f1np1 00:28:56.532 inet 192.168.100.9/24 scope global mlx_0_1 00:28:56.532 valid_lft forever preferred_lft forever 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:56.532 
15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:28:56.532 192.168.100.9' 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:28:56.532 192.168.100.9' 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # head -n 1 00:28:56.532 15:33:24 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:28:56.532 192.168.100.9' 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # tail -n +2 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # head -n 1 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:28:56.532 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:28:56.792 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:56.792 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:56.792 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:56.792 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:56.792 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3197465 00:28:56.792 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3197465 00:28:56.792 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:56.792 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3197465 ']' 00:28:56.792 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.792 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:56.792 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.792 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:56.792 15:33:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:56.792 [2024-11-06 15:33:24.274804] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
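The NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP values above come from parsing `ip -o -4 addr show` for each RDMA netdev; a condensed restatement of the commands visible in the trace (interface names as detected on this host):

    #!/usr/bin/env bash
    # Condensed get_ip_address / get_available_rdma_ips as traced above.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8 here
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9 here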
00:28:56.792 [2024-11-06 15:33:24.274930] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:57.051 [2024-11-06 15:33:24.428359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:57.051 [2024-11-06 15:33:24.537850] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:57.051 [2024-11-06 15:33:24.537909] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:57.051 [2024-11-06 15:33:24.537922] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:57.051 [2024-11-06 15:33:24.537935] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:57.051 [2024-11-06 15:33:24.537945] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:57.051 [2024-11-06 15:33:24.540402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:57.051 [2024-11-06 15:33:24.540487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:57.051 [2024-11-06 15:33:24.540548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.051 [2024-11-06 15:33:24.540575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:57.620 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:57.620 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:28:57.620 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:57.620 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:57.620 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:57.620 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:57.620 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:57.620 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.620 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:57.620 [2024-11-06 15:33:25.227059] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7f16d5bbd940) succeed. 00:28:57.620 [2024-11-06 15:33:25.236796] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7f16d5b79940) succeed. 
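The transport has just been created with the flags shown (-t rdma --num-shared-buffers 1024 -u 8192) and both mlx5 IB devices registered; what the test assembles into rpcs.txt next is essentially the standard per-subsystem bring-up. A hand-written sketch for one subsystem, using the sizes and address this job reports elsewhere (64 MB / 512 B Malloc bdevs, RDMA listener on 192.168.100.8:4420); the rpc.py path and the serial string are illustrative assumptions, and the in-tree wrappers differ in detail:

    #!/usr/bin/env bash
    # Hand-written equivalent of one subsystem's worth of the RPCs generated above.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py      # assumed path
    "$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc1                           # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420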
00:28:57.879 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.879 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:57.879 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:57.879 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:57.879 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:58.141 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:58.141 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.141 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:58.142 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.142 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:58.142 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.142 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:58.142 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.142 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:58.142 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.142 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:58.142 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.142 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:58.142 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.142 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:58.142 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.142 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:58.142 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.142 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:58.142 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.142 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:58.142 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:28:58.142 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.142 15:33:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:58.142 Malloc1 00:28:58.142 [2024-11-06 15:33:25.675039] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:58.142 Malloc2 00:28:58.503 Malloc3 00:28:58.503 Malloc4 00:28:58.503 Malloc5 00:28:58.837 Malloc6 00:28:58.837 Malloc7 00:28:58.837 Malloc8 00:28:58.837 Malloc9 00:28:59.096 Malloc10 00:28:59.096 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.096 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:59.096 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:59.096 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:59.096 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3197874 00:28:59.096 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3197874 /var/tmp/bdevperf.sock 00:28:59.096 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3197874 ']' 00:28:59.096 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:59.096 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:59.096 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:59.096 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:59.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
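Each pass through the num_subsystems loop above cats one more fragment into rpcs.txt, and the Malloc1..Malloc10 bdevs plus the cnode listeners reported next appear to be the result of replaying that file over rpc_cmd. Per subsystem the batch amounts to roughly the sketch below; the Malloc size, block size, and serial number are illustrative assumptions, while the listener address and port match the trace.

# Sketch of the per-subsystem RPC batch (i = 1..10); sizes and serial are assumed values.
i=1
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420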
00:28:59.096 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:59.096 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:59.096 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:59.096 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:59.096 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.097 { 00:28:59.097 "params": { 00:28:59.097 "name": "Nvme$subsystem", 00:28:59.097 "trtype": "$TEST_TRANSPORT", 00:28:59.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.097 "adrfam": "ipv4", 00:28:59.097 "trsvcid": "$NVMF_PORT", 00:28:59.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.097 "hdgst": ${hdgst:-false}, 00:28:59.097 "ddgst": ${ddgst:-false} 00:28:59.097 }, 00:28:59.097 "method": "bdev_nvme_attach_controller" 00:28:59.097 } 00:28:59.097 EOF 00:28:59.097 )") 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.097 { 00:28:59.097 "params": { 00:28:59.097 "name": "Nvme$subsystem", 00:28:59.097 "trtype": "$TEST_TRANSPORT", 00:28:59.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.097 "adrfam": "ipv4", 00:28:59.097 "trsvcid": "$NVMF_PORT", 00:28:59.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.097 "hdgst": ${hdgst:-false}, 00:28:59.097 "ddgst": ${ddgst:-false} 00:28:59.097 }, 00:28:59.097 "method": "bdev_nvme_attach_controller" 00:28:59.097 } 00:28:59.097 EOF 00:28:59.097 )") 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.097 { 00:28:59.097 "params": { 00:28:59.097 "name": "Nvme$subsystem", 00:28:59.097 "trtype": "$TEST_TRANSPORT", 00:28:59.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.097 "adrfam": "ipv4", 00:28:59.097 "trsvcid": "$NVMF_PORT", 00:28:59.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.097 "hdgst": ${hdgst:-false}, 00:28:59.097 "ddgst": ${ddgst:-false} 00:28:59.097 }, 00:28:59.097 "method": "bdev_nvme_attach_controller" 00:28:59.097 } 00:28:59.097 EOF 00:28:59.097 )") 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:59.097 15:33:26 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.097 { 00:28:59.097 "params": { 00:28:59.097 "name": "Nvme$subsystem", 00:28:59.097 "trtype": "$TEST_TRANSPORT", 00:28:59.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.097 "adrfam": "ipv4", 00:28:59.097 "trsvcid": "$NVMF_PORT", 00:28:59.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.097 "hdgst": ${hdgst:-false}, 00:28:59.097 "ddgst": ${ddgst:-false} 00:28:59.097 }, 00:28:59.097 "method": "bdev_nvme_attach_controller" 00:28:59.097 } 00:28:59.097 EOF 00:28:59.097 )") 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.097 { 00:28:59.097 "params": { 00:28:59.097 "name": "Nvme$subsystem", 00:28:59.097 "trtype": "$TEST_TRANSPORT", 00:28:59.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.097 "adrfam": "ipv4", 00:28:59.097 "trsvcid": "$NVMF_PORT", 00:28:59.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.097 "hdgst": ${hdgst:-false}, 00:28:59.097 "ddgst": ${ddgst:-false} 00:28:59.097 }, 00:28:59.097 "method": "bdev_nvme_attach_controller" 00:28:59.097 } 00:28:59.097 EOF 00:28:59.097 )") 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.097 { 00:28:59.097 "params": { 00:28:59.097 "name": "Nvme$subsystem", 00:28:59.097 "trtype": "$TEST_TRANSPORT", 00:28:59.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.097 "adrfam": "ipv4", 00:28:59.097 "trsvcid": "$NVMF_PORT", 00:28:59.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.097 "hdgst": ${hdgst:-false}, 00:28:59.097 "ddgst": ${ddgst:-false} 00:28:59.097 }, 00:28:59.097 "method": "bdev_nvme_attach_controller" 00:28:59.097 } 00:28:59.097 EOF 00:28:59.097 )") 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.097 { 00:28:59.097 "params": { 00:28:59.097 "name": "Nvme$subsystem", 00:28:59.097 "trtype": "$TEST_TRANSPORT", 00:28:59.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.097 "adrfam": "ipv4", 00:28:59.097 "trsvcid": "$NVMF_PORT", 00:28:59.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.097 "hdgst": ${hdgst:-false}, 00:28:59.097 "ddgst": ${ddgst:-false} 00:28:59.097 }, 00:28:59.097 "method": 
"bdev_nvme_attach_controller" 00:28:59.097 } 00:28:59.097 EOF 00:28:59.097 )") 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.097 { 00:28:59.097 "params": { 00:28:59.097 "name": "Nvme$subsystem", 00:28:59.097 "trtype": "$TEST_TRANSPORT", 00:28:59.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.097 "adrfam": "ipv4", 00:28:59.097 "trsvcid": "$NVMF_PORT", 00:28:59.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.097 "hdgst": ${hdgst:-false}, 00:28:59.097 "ddgst": ${ddgst:-false} 00:28:59.097 }, 00:28:59.097 "method": "bdev_nvme_attach_controller" 00:28:59.097 } 00:28:59.097 EOF 00:28:59.097 )") 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.097 { 00:28:59.097 "params": { 00:28:59.097 "name": "Nvme$subsystem", 00:28:59.097 "trtype": "$TEST_TRANSPORT", 00:28:59.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.097 "adrfam": "ipv4", 00:28:59.097 "trsvcid": "$NVMF_PORT", 00:28:59.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.097 "hdgst": ${hdgst:-false}, 00:28:59.097 "ddgst": ${ddgst:-false} 00:28:59.097 }, 00:28:59.097 "method": "bdev_nvme_attach_controller" 00:28:59.097 } 00:28:59.097 EOF 00:28:59.097 )") 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.097 { 00:28:59.097 "params": { 00:28:59.097 "name": "Nvme$subsystem", 00:28:59.097 "trtype": "$TEST_TRANSPORT", 00:28:59.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.097 "adrfam": "ipv4", 00:28:59.097 "trsvcid": "$NVMF_PORT", 00:28:59.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.097 "hdgst": ${hdgst:-false}, 00:28:59.097 "ddgst": ${ddgst:-false} 00:28:59.097 }, 00:28:59.097 "method": "bdev_nvme_attach_controller" 00:28:59.097 } 00:28:59.097 EOF 00:28:59.097 )") 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:59.097 [2024-11-06 15:33:26.689539] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:28:59.097 [2024-11-06 15:33:26.689642] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:59.097 15:33:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:59.097 "params": { 00:28:59.097 "name": "Nvme1", 00:28:59.097 "trtype": "rdma", 00:28:59.097 "traddr": "192.168.100.8", 00:28:59.097 "adrfam": "ipv4", 00:28:59.097 "trsvcid": "4420", 00:28:59.097 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:59.097 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:59.097 "hdgst": false, 00:28:59.098 "ddgst": false 00:28:59.098 }, 00:28:59.098 "method": "bdev_nvme_attach_controller" 00:28:59.098 },{ 00:28:59.098 "params": { 00:28:59.098 "name": "Nvme2", 00:28:59.098 "trtype": "rdma", 00:28:59.098 "traddr": "192.168.100.8", 00:28:59.098 "adrfam": "ipv4", 00:28:59.098 "trsvcid": "4420", 00:28:59.098 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:59.098 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:59.098 "hdgst": false, 00:28:59.098 "ddgst": false 00:28:59.098 }, 00:28:59.098 "method": "bdev_nvme_attach_controller" 00:28:59.098 },{ 00:28:59.098 "params": { 00:28:59.098 "name": "Nvme3", 00:28:59.098 "trtype": "rdma", 00:28:59.098 "traddr": "192.168.100.8", 00:28:59.098 "adrfam": "ipv4", 00:28:59.098 "trsvcid": "4420", 00:28:59.098 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:59.098 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:59.098 "hdgst": false, 00:28:59.098 "ddgst": false 00:28:59.098 }, 00:28:59.098 "method": "bdev_nvme_attach_controller" 00:28:59.098 },{ 00:28:59.098 "params": { 00:28:59.098 "name": "Nvme4", 00:28:59.098 "trtype": "rdma", 00:28:59.098 "traddr": "192.168.100.8", 00:28:59.098 "adrfam": "ipv4", 00:28:59.098 "trsvcid": "4420", 00:28:59.098 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:59.098 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:59.098 "hdgst": false, 00:28:59.098 "ddgst": false 00:28:59.098 }, 00:28:59.098 "method": "bdev_nvme_attach_controller" 00:28:59.098 },{ 00:28:59.098 "params": { 00:28:59.098 "name": "Nvme5", 00:28:59.098 "trtype": "rdma", 00:28:59.098 "traddr": "192.168.100.8", 00:28:59.098 "adrfam": "ipv4", 00:28:59.098 "trsvcid": "4420", 00:28:59.098 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:59.098 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:59.098 "hdgst": false, 00:28:59.098 "ddgst": false 00:28:59.098 }, 00:28:59.098 "method": "bdev_nvme_attach_controller" 00:28:59.098 },{ 00:28:59.098 "params": { 00:28:59.098 "name": "Nvme6", 00:28:59.098 "trtype": "rdma", 00:28:59.098 "traddr": "192.168.100.8", 00:28:59.098 "adrfam": "ipv4", 00:28:59.098 "trsvcid": "4420", 00:28:59.098 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:59.098 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:59.098 "hdgst": false, 00:28:59.098 "ddgst": false 00:28:59.098 }, 00:28:59.098 "method": "bdev_nvme_attach_controller" 00:28:59.098 },{ 00:28:59.098 "params": { 00:28:59.098 "name": "Nvme7", 00:28:59.098 "trtype": "rdma", 00:28:59.098 "traddr": "192.168.100.8", 00:28:59.098 "adrfam": "ipv4", 00:28:59.098 "trsvcid": "4420", 00:28:59.098 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:59.098 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:59.098 "hdgst": false, 00:28:59.098 "ddgst": false 00:28:59.098 }, 00:28:59.098 "method": "bdev_nvme_attach_controller" 00:28:59.098 },{ 00:28:59.098 "params": { 00:28:59.098 "name": "Nvme8", 00:28:59.098 "trtype": "rdma", 00:28:59.098 "traddr": "192.168.100.8", 00:28:59.098 "adrfam": "ipv4", 00:28:59.098 "trsvcid": "4420", 00:28:59.098 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:28:59.098 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:59.098 "hdgst": false, 00:28:59.098 "ddgst": false 00:28:59.098 }, 00:28:59.098 "method": "bdev_nvme_attach_controller" 00:28:59.098 },{ 00:28:59.098 "params": { 00:28:59.098 "name": "Nvme9", 00:28:59.098 "trtype": "rdma", 00:28:59.098 "traddr": "192.168.100.8", 00:28:59.098 "adrfam": "ipv4", 00:28:59.098 "trsvcid": "4420", 00:28:59.098 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:59.098 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:59.098 "hdgst": false, 00:28:59.098 "ddgst": false 00:28:59.098 }, 00:28:59.098 "method": "bdev_nvme_attach_controller" 00:28:59.098 },{ 00:28:59.098 "params": { 00:28:59.098 "name": "Nvme10", 00:28:59.098 "trtype": "rdma", 00:28:59.098 "traddr": "192.168.100.8", 00:28:59.098 "adrfam": "ipv4", 00:28:59.098 "trsvcid": "4420", 00:28:59.098 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:59.098 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:59.098 "hdgst": false, 00:28:59.098 "ddgst": false 00:28:59.098 }, 00:28:59.098 "method": "bdev_nvme_attach_controller" 00:28:59.098 }' 00:28:59.357 [2024-11-06 15:33:26.839746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.357 [2024-11-06 15:33:26.953328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.737 15:33:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:00.737 15:33:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:29:00.737 15:33:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:00.737 15:33:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.737 15:33:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:00.737 15:33:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.737 15:33:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3197874 00:29:00.737 15:33:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:29:00.737 15:33:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:29:01.676 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3197874 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:01.676 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3197465 00:29:01.676 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:01.676 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:01.676 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:01.676 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:01.676 15:33:29 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.676 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.676 { 00:29:01.676 "params": { 00:29:01.676 "name": "Nvme$subsystem", 00:29:01.676 "trtype": "$TEST_TRANSPORT", 00:29:01.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.676 "adrfam": "ipv4", 00:29:01.676 "trsvcid": "$NVMF_PORT", 00:29:01.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.676 "hdgst": ${hdgst:-false}, 00:29:01.676 "ddgst": ${ddgst:-false} 00:29:01.676 }, 00:29:01.676 "method": "bdev_nvme_attach_controller" 00:29:01.676 } 00:29:01.676 EOF 00:29:01.676 )") 00:29:01.676 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.676 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.676 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.676 { 00:29:01.676 "params": { 00:29:01.676 "name": "Nvme$subsystem", 00:29:01.676 "trtype": "$TEST_TRANSPORT", 00:29:01.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.676 "adrfam": "ipv4", 00:29:01.676 "trsvcid": "$NVMF_PORT", 00:29:01.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.676 "hdgst": ${hdgst:-false}, 00:29:01.676 "ddgst": ${ddgst:-false} 00:29:01.676 }, 00:29:01.676 "method": "bdev_nvme_attach_controller" 00:29:01.676 } 00:29:01.676 EOF 00:29:01.676 )") 00:29:01.676 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.676 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.676 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.676 { 00:29:01.676 "params": { 00:29:01.676 "name": "Nvme$subsystem", 00:29:01.676 "trtype": "$TEST_TRANSPORT", 00:29:01.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.676 "adrfam": "ipv4", 00:29:01.676 "trsvcid": "$NVMF_PORT", 00:29:01.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.676 "hdgst": ${hdgst:-false}, 00:29:01.676 "ddgst": ${ddgst:-false} 00:29:01.676 }, 00:29:01.677 "method": "bdev_nvme_attach_controller" 00:29:01.677 } 00:29:01.677 EOF 00:29:01.677 )") 00:29:01.677 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.677 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.677 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.677 { 00:29:01.677 "params": { 00:29:01.677 "name": "Nvme$subsystem", 00:29:01.677 "trtype": "$TEST_TRANSPORT", 00:29:01.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.677 "adrfam": "ipv4", 00:29:01.677 "trsvcid": "$NVMF_PORT", 00:29:01.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.677 "hdgst": ${hdgst:-false}, 00:29:01.677 "ddgst": ${ddgst:-false} 00:29:01.677 }, 00:29:01.677 "method": 
"bdev_nvme_attach_controller" 00:29:01.677 } 00:29:01.677 EOF 00:29:01.677 )") 00:29:01.677 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.677 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.677 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.677 { 00:29:01.677 "params": { 00:29:01.677 "name": "Nvme$subsystem", 00:29:01.677 "trtype": "$TEST_TRANSPORT", 00:29:01.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.677 "adrfam": "ipv4", 00:29:01.677 "trsvcid": "$NVMF_PORT", 00:29:01.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.677 "hdgst": ${hdgst:-false}, 00:29:01.677 "ddgst": ${ddgst:-false} 00:29:01.677 }, 00:29:01.677 "method": "bdev_nvme_attach_controller" 00:29:01.677 } 00:29:01.677 EOF 00:29:01.677 )") 00:29:01.677 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.677 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.677 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.677 { 00:29:01.677 "params": { 00:29:01.677 "name": "Nvme$subsystem", 00:29:01.677 "trtype": "$TEST_TRANSPORT", 00:29:01.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.677 "adrfam": "ipv4", 00:29:01.677 "trsvcid": "$NVMF_PORT", 00:29:01.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.677 "hdgst": ${hdgst:-false}, 00:29:01.677 "ddgst": ${ddgst:-false} 00:29:01.677 }, 00:29:01.677 "method": "bdev_nvme_attach_controller" 00:29:01.677 } 00:29:01.677 EOF 00:29:01.677 )") 00:29:01.677 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.677 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.677 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.677 { 00:29:01.677 "params": { 00:29:01.677 "name": "Nvme$subsystem", 00:29:01.677 "trtype": "$TEST_TRANSPORT", 00:29:01.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.677 "adrfam": "ipv4", 00:29:01.677 "trsvcid": "$NVMF_PORT", 00:29:01.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.677 "hdgst": ${hdgst:-false}, 00:29:01.677 "ddgst": ${ddgst:-false} 00:29:01.677 }, 00:29:01.677 "method": "bdev_nvme_attach_controller" 00:29:01.677 } 00:29:01.677 EOF 00:29:01.677 )") 00:29:01.677 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.677 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.677 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.677 { 00:29:01.677 "params": { 00:29:01.677 "name": "Nvme$subsystem", 00:29:01.677 "trtype": "$TEST_TRANSPORT", 00:29:01.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.677 "adrfam": "ipv4", 00:29:01.677 "trsvcid": "$NVMF_PORT", 00:29:01.677 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.677 "hdgst": ${hdgst:-false}, 00:29:01.677 "ddgst": ${ddgst:-false} 00:29:01.677 }, 00:29:01.677 "method": "bdev_nvme_attach_controller" 00:29:01.677 } 00:29:01.677 EOF 00:29:01.677 )") 00:29:01.677 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.677 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.677 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.677 { 00:29:01.677 "params": { 00:29:01.677 "name": "Nvme$subsystem", 00:29:01.677 "trtype": "$TEST_TRANSPORT", 00:29:01.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.677 "adrfam": "ipv4", 00:29:01.677 "trsvcid": "$NVMF_PORT", 00:29:01.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.677 "hdgst": ${hdgst:-false}, 00:29:01.677 "ddgst": ${ddgst:-false} 00:29:01.677 }, 00:29:01.677 "method": "bdev_nvme_attach_controller" 00:29:01.677 } 00:29:01.677 EOF 00:29:01.677 )") 00:29:01.677 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.677 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.677 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.677 { 00:29:01.677 "params": { 00:29:01.677 "name": "Nvme$subsystem", 00:29:01.677 "trtype": "$TEST_TRANSPORT", 00:29:01.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.677 "adrfam": "ipv4", 00:29:01.677 "trsvcid": "$NVMF_PORT", 00:29:01.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.677 "hdgst": ${hdgst:-false}, 00:29:01.677 "ddgst": ${ddgst:-false} 00:29:01.677 }, 00:29:01.677 "method": "bdev_nvme_attach_controller" 00:29:01.677 } 00:29:01.677 EOF 00:29:01.677 )") 00:29:01.677 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.677 [2024-11-06 15:33:29.139805] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:29:01.677 [2024-11-06 15:33:29.139902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3198113 ] 00:29:01.677 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:29:01.677 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:01.677 15:33:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:01.677 "params": { 00:29:01.677 "name": "Nvme1", 00:29:01.677 "trtype": "rdma", 00:29:01.677 "traddr": "192.168.100.8", 00:29:01.677 "adrfam": "ipv4", 00:29:01.677 "trsvcid": "4420", 00:29:01.677 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:01.677 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:01.677 "hdgst": false, 00:29:01.677 "ddgst": false 00:29:01.677 }, 00:29:01.677 "method": "bdev_nvme_attach_controller" 00:29:01.677 },{ 00:29:01.677 "params": { 00:29:01.677 "name": "Nvme2", 00:29:01.677 "trtype": "rdma", 00:29:01.677 "traddr": "192.168.100.8", 00:29:01.677 "adrfam": "ipv4", 00:29:01.677 "trsvcid": "4420", 00:29:01.677 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:01.677 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:01.677 "hdgst": false, 00:29:01.677 "ddgst": false 00:29:01.677 }, 00:29:01.677 "method": "bdev_nvme_attach_controller" 00:29:01.677 },{ 00:29:01.677 "params": { 00:29:01.677 "name": "Nvme3", 00:29:01.677 "trtype": "rdma", 00:29:01.677 "traddr": "192.168.100.8", 00:29:01.677 "adrfam": "ipv4", 00:29:01.677 "trsvcid": "4420", 00:29:01.677 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:01.677 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:01.677 "hdgst": false, 00:29:01.677 "ddgst": false 00:29:01.677 }, 00:29:01.677 "method": "bdev_nvme_attach_controller" 00:29:01.677 },{ 00:29:01.677 "params": { 00:29:01.677 "name": "Nvme4", 00:29:01.677 "trtype": "rdma", 00:29:01.677 "traddr": "192.168.100.8", 00:29:01.677 "adrfam": "ipv4", 00:29:01.677 "trsvcid": "4420", 00:29:01.677 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:01.677 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:01.677 "hdgst": false, 00:29:01.677 "ddgst": false 00:29:01.677 }, 00:29:01.677 "method": "bdev_nvme_attach_controller" 00:29:01.677 },{ 00:29:01.677 "params": { 00:29:01.677 "name": "Nvme5", 00:29:01.677 "trtype": "rdma", 00:29:01.677 "traddr": "192.168.100.8", 00:29:01.677 "adrfam": "ipv4", 00:29:01.677 "trsvcid": "4420", 00:29:01.677 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:01.677 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:01.677 "hdgst": false, 00:29:01.677 "ddgst": false 00:29:01.677 }, 00:29:01.677 "method": "bdev_nvme_attach_controller" 00:29:01.677 },{ 00:29:01.677 "params": { 00:29:01.677 "name": "Nvme6", 00:29:01.677 "trtype": "rdma", 00:29:01.677 "traddr": "192.168.100.8", 00:29:01.677 "adrfam": "ipv4", 00:29:01.677 "trsvcid": "4420", 00:29:01.677 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:01.677 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:01.677 "hdgst": false, 00:29:01.677 "ddgst": false 00:29:01.678 }, 00:29:01.678 "method": "bdev_nvme_attach_controller" 00:29:01.678 },{ 00:29:01.678 "params": { 00:29:01.678 "name": "Nvme7", 00:29:01.678 "trtype": "rdma", 00:29:01.678 "traddr": "192.168.100.8", 00:29:01.678 "adrfam": "ipv4", 00:29:01.678 "trsvcid": "4420", 00:29:01.678 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:01.678 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:01.678 "hdgst": false, 00:29:01.678 "ddgst": false 00:29:01.678 }, 00:29:01.678 "method": "bdev_nvme_attach_controller" 00:29:01.678 },{ 00:29:01.678 "params": { 00:29:01.678 "name": "Nvme8", 00:29:01.678 "trtype": "rdma", 00:29:01.678 "traddr": "192.168.100.8", 00:29:01.678 "adrfam": "ipv4", 00:29:01.678 "trsvcid": "4420", 00:29:01.678 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:29:01.678 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:01.678 "hdgst": false, 00:29:01.678 "ddgst": false 00:29:01.678 }, 00:29:01.678 "method": "bdev_nvme_attach_controller" 00:29:01.678 },{ 00:29:01.678 "params": { 00:29:01.678 "name": "Nvme9", 00:29:01.678 "trtype": "rdma", 00:29:01.678 "traddr": "192.168.100.8", 00:29:01.678 "adrfam": "ipv4", 00:29:01.678 "trsvcid": "4420", 00:29:01.678 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:01.678 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:01.678 "hdgst": false, 00:29:01.678 "ddgst": false 00:29:01.678 }, 00:29:01.678 "method": "bdev_nvme_attach_controller" 00:29:01.678 },{ 00:29:01.678 "params": { 00:29:01.678 "name": "Nvme10", 00:29:01.678 "trtype": "rdma", 00:29:01.678 "traddr": "192.168.100.8", 00:29:01.678 "adrfam": "ipv4", 00:29:01.678 "trsvcid": "4420", 00:29:01.678 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:01.678 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:01.678 "hdgst": false, 00:29:01.678 "ddgst": false 00:29:01.678 }, 00:29:01.678 "method": "bdev_nvme_attach_controller" 00:29:01.678 }' 00:29:01.678 [2024-11-06 15:33:29.297080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.937 [2024-11-06 15:33:29.408539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.313 Running I/O for 1 seconds... 00:29:04.250 3165.00 IOPS, 197.81 MiB/s 00:29:04.250 Latency(us) 00:29:04.250 [2024-11-06T14:33:31.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.250 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:04.250 Verification LBA range: start 0x0 length 0x400 00:29:04.250 Nvme1n1 : 1.20 321.32 20.08 0.00 0.00 196058.60 16640.45 227951.30 00:29:04.250 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:04.250 Verification LBA range: start 0x0 length 0x400 00:29:04.250 Nvme2n1 : 1.20 356.82 22.30 0.00 0.00 173914.27 3120.08 173242.99 00:29:04.250 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:04.250 Verification LBA range: start 0x0 length 0x400 00:29:04.250 Nvme3n1 : 1.20 347.20 21.70 0.00 0.00 175947.45 16754.42 169595.77 00:29:04.250 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:04.250 Verification LBA range: start 0x0 length 0x400 00:29:04.250 Nvme4n1 : 1.20 346.78 21.67 0.00 0.00 173621.08 17552.25 162301.33 00:29:04.250 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:04.250 Verification LBA range: start 0x0 length 0x400 00:29:04.250 Nvme5n1 : 1.20 326.53 20.41 0.00 0.00 180692.99 17096.35 155006.89 00:29:04.250 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:04.250 Verification LBA range: start 0x0 length 0x400 00:29:04.250 Nvme6n1 : 1.18 324.07 20.25 0.00 0.00 181595.57 17438.27 141329.81 00:29:04.250 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:04.250 Verification LBA range: start 0x0 length 0x400 00:29:04.250 Nvme7n1 : 1.20 322.15 20.13 0.00 0.00 177910.77 14075.99 126740.93 00:29:04.250 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:04.250 Verification LBA range: start 0x0 length 0x400 00:29:04.250 Nvme8n1 : 1.19 323.02 20.19 0.00 0.00 176675.02 15956.59 111696.14 00:29:04.250 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:04.250 Verification LBA range: start 0x0 length 0x400 00:29:04.250 Nvme9n1 : 1.19 322.41 20.15 0.00 0.00 174272.63 15956.59 121270.09 
00:29:04.250 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:04.250 Verification LBA range: start 0x0 length 0x400 00:29:04.250 Nvme10n1 : 1.19 321.81 20.11 0.00 0.00 171570.16 15956.59 138594.39 00:29:04.250 [2024-11-06T14:33:31.885Z] =================================================================================================================== 00:29:04.250 [2024-11-06T14:33:31.885Z] Total : 3312.10 207.01 0.00 0.00 178128.11 3120.08 227951.30 00:29:05.629 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:29:05.629 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:05.629 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:05.629 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:05.629 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:05.629 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:05.629 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:29:05.629 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:29:05.629 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:29:05.629 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:29:05.629 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:05.629 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:29:05.629 rmmod nvme_rdma 00:29:05.629 rmmod nvme_fabrics 00:29:05.629 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:05.629 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:29:05.629 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:29:05.629 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3197465 ']' 00:29:05.629 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3197465 00:29:05.629 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 3197465 ']' 00:29:05.629 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 3197465 00:29:05.629 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:29:05.629 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:05.629 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3197465 00:29:05.629 15:33:32 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:05.630 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:05.630 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3197465' 00:29:05.630 killing process with pid 3197465 00:29:05.630 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 3197465 00:29:05.630 15:33:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 3197465 00:29:09.827 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:09.827 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:29:09.827 00:29:09.827 real 0m19.369s 00:29:09.827 user 0m52.523s 00:29:09.827 sys 0m7.028s 00:29:09.827 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:09.827 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:09.827 ************************************ 00:29:09.827 END TEST nvmf_shutdown_tc1 00:29:09.827 ************************************ 00:29:09.827 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:09.827 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:09.827 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:09.827 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:09.827 ************************************ 00:29:09.827 START TEST nvmf_shutdown_tc2 00:29:09.827 ************************************ 00:29:09.827 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:29:09.827 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:29:09.827 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:09.827 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:29:09.827 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:09.827 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:09.827 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:09.827 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:09.827 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.827 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:09.827 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.827 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:09.828 15:33:36 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:29:09.828 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:29:09.828 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:29:09.828 Found net devices under 0000:18:00.0: mlx_0_0 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:29:09.828 Found net devices under 0000:18:00.1: mlx_0_1 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # rdma_device_init 00:29:09.828 
15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:09.828 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # 
for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:29:09.829 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:09.829 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:29:09.829 altname enp24s0f0np0 00:29:09.829 altname ens785f0np0 00:29:09.829 inet 192.168.100.8/24 scope global mlx_0_0 00:29:09.829 valid_lft forever preferred_lft forever 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:29:09.829 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:09.829 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:29:09.829 altname enp24s0f1np1 00:29:09.829 altname ens785f1np1 00:29:09.829 inet 192.168.100.9/24 scope global mlx_0_1 00:29:09.829 valid_lft forever preferred_lft forever 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:29:09.829 15:33:36 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:09.829 15:33:36 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:29:09.829 192.168.100.9' 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:29:09.829 192.168.100.9' 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # head -n 1 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:29:09.829 192.168.100.9' 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # tail -n +2 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # head -n 1 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3199284 00:29:09.829 15:33:36 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3199284 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3199284 ']' 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:09.829 15:33:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.829 [2024-11-06 15:33:37.043233] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:29:09.829 [2024-11-06 15:33:37.043336] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.829 [2024-11-06 15:33:37.195797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:09.830 [2024-11-06 15:33:37.308171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:09.830 [2024-11-06 15:33:37.308225] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:09.830 [2024-11-06 15:33:37.308239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:09.830 [2024-11-06 15:33:37.308252] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:09.830 [2024-11-06 15:33:37.308262] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
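For orientation, the address bookkeeping traced just above (nvmf/common.sh@116-117 and @484-486) reduces to the short bash sketch below. It reuses the interface names and commands visible in the trace and is only a condensed illustration, not the full common.sh implementation.

# Sketch: read the IPv4 address off each RDMA netdev, then split the list into first/second target IPs
get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1       # e.g. 192.168.100.8
}
RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9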
00:29:09.830 [2024-11-06 15:33:37.310452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:09.830 [2024-11-06 15:33:37.310573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:09.830 [2024-11-06 15:33:37.310589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.830 [2024-11-06 15:33:37.310618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:10.398 15:33:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:10.398 15:33:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:29:10.398 15:33:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:10.398 15:33:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:10.398 15:33:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.398 15:33:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:10.398 15:33:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:10.398 15:33:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.398 15:33:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.398 [2024-11-06 15:33:37.928679] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7f5f24b76940) succeed. 00:29:10.398 [2024-11-06 15:33:37.938286] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7f5f24b2f940) succeed. 
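Stripped of the test-framework wrappers, the target bring-up traced above amounts to roughly the following. nvmfappstart and rpc_cmd are framework helpers; this sketch assumes rpc_cmd forwards to SPDK's scripts/rpc.py on the default /var/tmp/spdk.sock, with the flags copied from the trace.

# Sketch: start the NVMe-oF target on core mask 0x1E, then create the RDMA transport
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# ...wait until the app listens on /var/tmp/spdk.sock (the waitforlisten step above)...
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192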
00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.658 15:33:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.917 Malloc1 00:29:10.917 [2024-11-06 15:33:38.386700] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:10.917 Malloc2 00:29:10.917 Malloc3 00:29:11.176 Malloc4 00:29:11.176 Malloc5 00:29:11.436 Malloc6 00:29:11.436 Malloc7 00:29:11.436 Malloc8 00:29:11.695 Malloc9 00:29:11.695 Malloc10 00:29:11.695 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.695 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:11.695 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:11.695 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.695 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3199595 00:29:11.695 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3199595 /var/tmp/bdevperf.sock 00:29:11.695 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3199595 ']' 00:29:11.695 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:11.695 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:11.695 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:11.695 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:11.695 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:11.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
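On the initiator side the test launches SPDK's bdevperf example with the generated JSON configuration on an anonymous file descriptor; the /dev/fd/63 seen in the trace is consistent with a process substitution along these lines (command and flags copied from shutdown.sh@103, the rest is an illustration).

# Sketch: 64-deep, 64 KiB verify workload for 10 seconds against the ten attached controllers
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!
# then wait for it to listen on /var/tmp/bdevperf.sock, as echoed above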
00:29:11.695 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:11.695 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:29:11.695 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.695 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:29:11.695 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.695 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.695 { 00:29:11.695 "params": { 00:29:11.695 "name": "Nvme$subsystem", 00:29:11.695 "trtype": "$TEST_TRANSPORT", 00:29:11.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.695 "adrfam": "ipv4", 00:29:11.695 "trsvcid": "$NVMF_PORT", 00:29:11.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.695 "hdgst": ${hdgst:-false}, 00:29:11.695 "ddgst": ${ddgst:-false} 00:29:11.695 }, 00:29:11.695 "method": "bdev_nvme_attach_controller" 00:29:11.695 } 00:29:11.695 EOF 00:29:11.695 )") 00:29:11.696 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:11.696 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.696 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.696 { 00:29:11.696 "params": { 00:29:11.696 "name": "Nvme$subsystem", 00:29:11.696 "trtype": "$TEST_TRANSPORT", 00:29:11.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.696 "adrfam": "ipv4", 00:29:11.696 "trsvcid": "$NVMF_PORT", 00:29:11.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.696 "hdgst": ${hdgst:-false}, 00:29:11.696 "ddgst": ${ddgst:-false} 00:29:11.696 }, 00:29:11.696 "method": "bdev_nvme_attach_controller" 00:29:11.696 } 00:29:11.696 EOF 00:29:11.696 )") 00:29:11.696 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:11.696 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.696 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.696 { 00:29:11.696 "params": { 00:29:11.696 "name": "Nvme$subsystem", 00:29:11.696 "trtype": "$TEST_TRANSPORT", 00:29:11.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.696 "adrfam": "ipv4", 00:29:11.696 "trsvcid": "$NVMF_PORT", 00:29:11.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.696 "hdgst": ${hdgst:-false}, 00:29:11.696 "ddgst": ${ddgst:-false} 00:29:11.696 }, 00:29:11.696 "method": "bdev_nvme_attach_controller" 00:29:11.696 } 00:29:11.696 EOF 00:29:11.696 )") 00:29:11.696 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:11.696 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.696 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.696 { 00:29:11.696 "params": { 00:29:11.696 "name": "Nvme$subsystem", 00:29:11.696 "trtype": "$TEST_TRANSPORT", 00:29:11.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.696 "adrfam": "ipv4", 00:29:11.696 "trsvcid": "$NVMF_PORT", 00:29:11.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.696 "hdgst": ${hdgst:-false}, 00:29:11.696 "ddgst": ${ddgst:-false} 00:29:11.696 }, 00:29:11.696 "method": "bdev_nvme_attach_controller" 00:29:11.696 } 00:29:11.696 EOF 00:29:11.696 )") 00:29:11.696 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:11.696 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.696 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.696 { 00:29:11.696 "params": { 00:29:11.696 "name": "Nvme$subsystem", 00:29:11.696 "trtype": "$TEST_TRANSPORT", 00:29:11.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.696 "adrfam": "ipv4", 00:29:11.696 "trsvcid": "$NVMF_PORT", 00:29:11.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.696 "hdgst": ${hdgst:-false}, 00:29:11.696 "ddgst": ${ddgst:-false} 00:29:11.696 }, 00:29:11.696 "method": "bdev_nvme_attach_controller" 00:29:11.696 } 00:29:11.696 EOF 00:29:11.696 )") 00:29:11.696 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:11.696 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.696 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.696 { 00:29:11.696 "params": { 00:29:11.696 "name": "Nvme$subsystem", 00:29:11.696 "trtype": "$TEST_TRANSPORT", 00:29:11.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.696 "adrfam": "ipv4", 00:29:11.696 "trsvcid": "$NVMF_PORT", 00:29:11.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.696 "hdgst": ${hdgst:-false}, 00:29:11.696 "ddgst": ${ddgst:-false} 00:29:11.696 }, 00:29:11.696 "method": "bdev_nvme_attach_controller" 00:29:11.696 } 00:29:11.696 EOF 00:29:11.696 )") 00:29:11.696 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:11.696 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.696 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.696 { 00:29:11.696 "params": { 00:29:11.696 "name": "Nvme$subsystem", 00:29:11.696 "trtype": "$TEST_TRANSPORT", 00:29:11.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.696 "adrfam": "ipv4", 00:29:11.696 "trsvcid": "$NVMF_PORT", 00:29:11.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.696 "hdgst": ${hdgst:-false}, 00:29:11.696 "ddgst": ${ddgst:-false} 00:29:11.696 }, 00:29:11.696 "method": "bdev_nvme_attach_controller" 00:29:11.696 } 00:29:11.696 EOF 00:29:11.696 )") 00:29:11.956 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:11.956 15:33:39 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.956 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.956 { 00:29:11.956 "params": { 00:29:11.956 "name": "Nvme$subsystem", 00:29:11.956 "trtype": "$TEST_TRANSPORT", 00:29:11.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.956 "adrfam": "ipv4", 00:29:11.956 "trsvcid": "$NVMF_PORT", 00:29:11.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.956 "hdgst": ${hdgst:-false}, 00:29:11.956 "ddgst": ${ddgst:-false} 00:29:11.956 }, 00:29:11.956 "method": "bdev_nvme_attach_controller" 00:29:11.956 } 00:29:11.956 EOF 00:29:11.956 )") 00:29:11.956 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:11.956 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.956 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.956 { 00:29:11.956 "params": { 00:29:11.956 "name": "Nvme$subsystem", 00:29:11.956 "trtype": "$TEST_TRANSPORT", 00:29:11.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.956 "adrfam": "ipv4", 00:29:11.956 "trsvcid": "$NVMF_PORT", 00:29:11.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.956 "hdgst": ${hdgst:-false}, 00:29:11.956 "ddgst": ${ddgst:-false} 00:29:11.956 }, 00:29:11.956 "method": "bdev_nvme_attach_controller" 00:29:11.956 } 00:29:11.956 EOF 00:29:11.956 )") 00:29:11.956 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:11.956 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.956 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.956 { 00:29:11.956 "params": { 00:29:11.956 "name": "Nvme$subsystem", 00:29:11.956 "trtype": "$TEST_TRANSPORT", 00:29:11.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.956 "adrfam": "ipv4", 00:29:11.956 "trsvcid": "$NVMF_PORT", 00:29:11.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.956 "hdgst": ${hdgst:-false}, 00:29:11.956 "ddgst": ${ddgst:-false} 00:29:11.956 }, 00:29:11.956 "method": "bdev_nvme_attach_controller" 00:29:11.956 } 00:29:11.956 EOF 00:29:11.956 )") 00:29:11.956 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:11.956 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:29:11.956 [2024-11-06 15:33:39.361921] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
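gen_nvmf_target_json, whose heredoc template repeats above once per requested subsystem, boils down to the loop sketched here. Variable names are taken from the trace, the final comma join mirrors the IFS=, and printf steps at common.sh@584-586, and the outer JSON wrapper that bdevperf expects is omitted because it is not visible in this excerpt.

# Sketch: emit one bdev_nvme_attach_controller block per subsystem number passed in
config=()
for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
(IFS=,; printf '%s\n' "${config[*]}") | jq .   # entries joined with commas and pretty-printed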
00:29:11.956 [2024-11-06 15:33:39.362046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3199595 ] 00:29:11.956 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:29:11.956 15:33:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:11.956 "params": { 00:29:11.956 "name": "Nvme1", 00:29:11.956 "trtype": "rdma", 00:29:11.956 "traddr": "192.168.100.8", 00:29:11.956 "adrfam": "ipv4", 00:29:11.956 "trsvcid": "4420", 00:29:11.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:11.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:11.956 "hdgst": false, 00:29:11.956 "ddgst": false 00:29:11.956 }, 00:29:11.956 "method": "bdev_nvme_attach_controller" 00:29:11.956 },{ 00:29:11.956 "params": { 00:29:11.956 "name": "Nvme2", 00:29:11.956 "trtype": "rdma", 00:29:11.956 "traddr": "192.168.100.8", 00:29:11.956 "adrfam": "ipv4", 00:29:11.956 "trsvcid": "4420", 00:29:11.956 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:11.956 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:11.956 "hdgst": false, 00:29:11.956 "ddgst": false 00:29:11.956 }, 00:29:11.956 "method": "bdev_nvme_attach_controller" 00:29:11.956 },{ 00:29:11.956 "params": { 00:29:11.956 "name": "Nvme3", 00:29:11.956 "trtype": "rdma", 00:29:11.956 "traddr": "192.168.100.8", 00:29:11.956 "adrfam": "ipv4", 00:29:11.956 "trsvcid": "4420", 00:29:11.956 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:11.956 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:11.956 "hdgst": false, 00:29:11.956 "ddgst": false 00:29:11.956 }, 00:29:11.956 "method": "bdev_nvme_attach_controller" 00:29:11.956 },{ 00:29:11.956 "params": { 00:29:11.956 "name": "Nvme4", 00:29:11.956 "trtype": "rdma", 00:29:11.956 "traddr": "192.168.100.8", 00:29:11.956 "adrfam": "ipv4", 00:29:11.956 "trsvcid": "4420", 00:29:11.956 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:11.956 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:11.956 "hdgst": false, 00:29:11.956 "ddgst": false 00:29:11.956 }, 00:29:11.956 "method": "bdev_nvme_attach_controller" 00:29:11.956 },{ 00:29:11.956 "params": { 00:29:11.956 "name": "Nvme5", 00:29:11.956 "trtype": "rdma", 00:29:11.956 "traddr": "192.168.100.8", 00:29:11.956 "adrfam": "ipv4", 00:29:11.956 "trsvcid": "4420", 00:29:11.956 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:11.956 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:11.956 "hdgst": false, 00:29:11.956 "ddgst": false 00:29:11.956 }, 00:29:11.956 "method": "bdev_nvme_attach_controller" 00:29:11.956 },{ 00:29:11.956 "params": { 00:29:11.956 "name": "Nvme6", 00:29:11.956 "trtype": "rdma", 00:29:11.957 "traddr": "192.168.100.8", 00:29:11.957 "adrfam": "ipv4", 00:29:11.957 "trsvcid": "4420", 00:29:11.957 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:11.957 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:11.957 "hdgst": false, 00:29:11.957 "ddgst": false 00:29:11.957 }, 00:29:11.957 "method": "bdev_nvme_attach_controller" 00:29:11.957 },{ 00:29:11.957 "params": { 00:29:11.957 "name": "Nvme7", 00:29:11.957 "trtype": "rdma", 00:29:11.957 "traddr": "192.168.100.8", 00:29:11.957 "adrfam": "ipv4", 00:29:11.957 "trsvcid": "4420", 00:29:11.957 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:11.957 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:11.957 "hdgst": false, 00:29:11.957 "ddgst": false 00:29:11.957 }, 00:29:11.957 
"method": "bdev_nvme_attach_controller" 00:29:11.957 },{ 00:29:11.957 "params": { 00:29:11.957 "name": "Nvme8", 00:29:11.957 "trtype": "rdma", 00:29:11.957 "traddr": "192.168.100.8", 00:29:11.957 "adrfam": "ipv4", 00:29:11.957 "trsvcid": "4420", 00:29:11.957 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:11.957 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:11.957 "hdgst": false, 00:29:11.957 "ddgst": false 00:29:11.957 }, 00:29:11.957 "method": "bdev_nvme_attach_controller" 00:29:11.957 },{ 00:29:11.957 "params": { 00:29:11.957 "name": "Nvme9", 00:29:11.957 "trtype": "rdma", 00:29:11.957 "traddr": "192.168.100.8", 00:29:11.957 "adrfam": "ipv4", 00:29:11.957 "trsvcid": "4420", 00:29:11.957 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:11.957 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:11.957 "hdgst": false, 00:29:11.957 "ddgst": false 00:29:11.957 }, 00:29:11.957 "method": "bdev_nvme_attach_controller" 00:29:11.957 },{ 00:29:11.957 "params": { 00:29:11.957 "name": "Nvme10", 00:29:11.957 "trtype": "rdma", 00:29:11.957 "traddr": "192.168.100.8", 00:29:11.957 "adrfam": "ipv4", 00:29:11.957 "trsvcid": "4420", 00:29:11.957 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:11.957 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:11.957 "hdgst": false, 00:29:11.957 "ddgst": false 00:29:11.957 }, 00:29:11.957 "method": "bdev_nvme_attach_controller" 00:29:11.957 }' 00:29:11.957 [2024-11-06 15:33:39.516972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.215 [2024-11-06 15:33:39.628180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.151 Running I/O for 10 seconds... 00:29:13.151 15:33:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:13.151 15:33:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:29:13.151 15:33:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:13.151 15:33:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.151 15:33:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.410 15:33:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.410 15:33:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:13.410 15:33:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:13.410 15:33:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:13.410 15:33:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:13.410 15:33:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:13.410 15:33:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:13.410 15:33:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:13.410 15:33:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 
00:29:13.410 15:33:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:13.410 15:33:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.410 15:33:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.669 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.669 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=27 00:29:13.669 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 27 -ge 100 ']' 00:29:13.669 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:13.928 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:13.928 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:13.928 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:13.928 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:13.928 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.928 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.928 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.188 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=179 00:29:14.188 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 179 -ge 100 ']' 00:29:14.188 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:14.188 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:14.188 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:14.188 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3199595 00:29:14.188 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3199595 ']' 00:29:14.188 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3199595 00:29:14.188 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:29:14.188 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:14.188 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3199595 00:29:14.188 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:14.188 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:14.188 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3199595' 00:29:14.188 killing process with pid 3199595 00:29:14.188 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3199595 00:29:14.189 15:33:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3199595 00:29:14.189 Received shutdown signal, test time was about 0.951112 seconds 00:29:14.189 00:29:14.189 Latency(us) 00:29:14.189 [2024-11-06T14:33:41.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.189 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:14.189 Verification LBA range: start 0x0 length 0x400 00:29:14.189 Nvme1n1 : 0.93 326.70 20.42 0.00 0.00 192122.27 9630.94 253481.85 00:29:14.189 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:14.189 Verification LBA range: start 0x0 length 0x400 00:29:14.189 Nvme2n1 : 0.93 343.22 21.45 0.00 0.00 180199.07 11283.59 183272.85 00:29:14.189 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:14.189 Verification LBA range: start 0x0 length 0x400 00:29:14.189 Nvme3n1 : 0.93 342.65 21.42 0.00 0.00 176715.51 11682.50 175978.41 00:29:14.189 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:14.189 Verification LBA range: start 0x0 length 0x400 00:29:14.189 Nvme4n1 : 0.94 350.65 21.92 0.00 0.00 169425.33 5128.90 168683.97 00:29:14.189 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:14.189 Verification LBA range: start 0x0 length 0x400 00:29:14.189 Nvme5n1 : 0.94 341.41 21.34 0.00 0.00 171304.87 12594.31 157742.30 00:29:14.189 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:14.189 Verification LBA range: start 0x0 length 0x400 00:29:14.189 Nvme6n1 : 0.94 340.76 21.30 0.00 0.00 168177.40 13221.18 146800.64 00:29:14.189 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:14.189 Verification LBA range: start 0x0 length 0x400 00:29:14.189 Nvme7n1 : 0.94 340.22 21.26 0.00 0.00 164569.98 13506.11 139506.20 00:29:14.189 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:14.189 Verification LBA range: start 0x0 length 0x400 00:29:14.189 Nvme8n1 : 0.94 339.61 21.23 0.00 0.00 161907.76 13962.02 130388.15 00:29:14.189 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:14.189 Verification LBA range: start 0x0 length 0x400 00:29:14.189 Nvme9n1 : 0.94 338.94 21.18 0.00 0.00 159271.89 14588.88 118534.68 00:29:14.189 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:14.189 Verification LBA range: start 0x0 length 0x400 00:29:14.189 Nvme10n1 : 0.95 269.36 16.83 0.00 0.00 196275.98 5271.37 271717.95 00:29:14.189 [2024-11-06T14:33:41.824Z] =================================================================================================================== 00:29:14.189 [2024-11-06T14:33:41.824Z] Total : 3333.53 208.35 0.00 0.00 173436.76 5128.90 271717.95 00:29:15.567 15:33:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3199284 00:29:16.505 15:33:43 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:29:16.505 rmmod nvme_rdma 00:29:16.505 rmmod nvme_fabrics 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3199284 ']' 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3199284 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3199284 ']' 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3199284 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3199284 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3199284' 00:29:16.505 killing process with pid 3199284 00:29:16.505 15:33:43 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3199284 00:29:16.505 15:33:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3199284 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:29:20.702 00:29:20.702 real 0m10.871s 00:29:20.702 user 0m42.042s 00:29:20.702 sys 0m1.690s 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:20.702 ************************************ 00:29:20.702 END TEST nvmf_shutdown_tc2 00:29:20.702 ************************************ 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:20.702 ************************************ 00:29:20.702 START TEST nvmf_shutdown_tc3 00:29:20.702 ************************************ 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:20.702 15:33:47 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:20.702 15:33:47 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:29:20.702 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:29:20.702 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:20.702 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:20.703 15:33:47 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:29:20.703 Found net devices under 0000:18:00.0: mlx_0_0 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:29:20.703 Found net devices under 0000:18:00.1: mlx_0_1 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # rdma_device_init 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@66 -- # modprobe ib_cm 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:29:20.703 15:33:47 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:29:20.703 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:20.703 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:29:20.703 altname enp24s0f0np0 00:29:20.703 altname ens785f0np0 00:29:20.703 inet 192.168.100.8/24 scope global mlx_0_0 00:29:20.703 valid_lft forever preferred_lft forever 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:29:20.703 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:20.703 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:29:20.703 altname enp24s0f1np1 00:29:20.703 altname ens785f1np1 00:29:20.703 inet 192.168.100.9/24 scope global mlx_0_1 00:29:20.703 valid_lft forever preferred_lft forever 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@484 -- # get_available_rdma_ips 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:20.703 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- 
# get_ip_address mlx_0_1 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:29:20.704 192.168.100.9' 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:29:20.704 192.168.100.9' 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # head -n 1 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:29:20.704 192.168.100.9' 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # tail -n +2 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # head -n 1 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3200805 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3200805 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3200805 ']' 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:20.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:20.704 15:33:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:20.704 [2024-11-06 15:33:48.016180] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:29:20.704 [2024-11-06 15:33:48.016296] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:20.704 [2024-11-06 15:33:48.171037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:20.704 [2024-11-06 15:33:48.281720] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:20.704 [2024-11-06 15:33:48.281775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:20.704 [2024-11-06 15:33:48.281788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:20.704 [2024-11-06 15:33:48.281800] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:20.704 [2024-11-06 15:33:48.281810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
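The nvmfappstart/waitforlisten trace above is the usual start-and-wait pattern; a minimal sketch of what those helpers boil down to (simplified, using the exact command line and socket path printed in the trace):

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!                  # 3200805 in this run
  waitforlisten "$nvmfpid"    # polls /var/tmp/spdk.sock until the target answers RPCs

Once the socket answers, the reactor notices that follow are printed by the target itself as its cores (mask 0x1E) come up.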
00:29:20.704 [2024-11-06 15:33:48.284110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:20.704 [2024-11-06 15:33:48.284214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:20.704 [2024-11-06 15:33:48.284276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:20.704 [2024-11-06 15:33:48.284264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.273 15:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:21.273 15:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:29:21.273 15:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:21.273 15:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:21.273 15:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:21.273 15:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:21.273 15:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:21.273 15:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.273 15:33:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:21.273 [2024-11-06 15:33:48.900279] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7f162abbd940) succeed. 00:29:21.532 [2024-11-06 15:33:48.909910] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7f162ab79940) succeed. 
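With all four reactors up, shutdown.sh@21 creates the RDMA transport over the RPC socket; rpc_cmd is the test harness's wrapper around SPDK's scripts/rpc.py, so the equivalent standalone invocation would be roughly (assuming the default /var/tmp/spdk.sock socket):

  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The two create_ib_device notices above confirm the transport picked up both mlx5 ports (mlx5_0 and mlx5_1) behind the mlx_0_0/mlx_0_1 net devices found earlier.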
00:29:21.791 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.791 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:21.791 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:21.791 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:21.791 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:21.791 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:21.791 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.791 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:21.791 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.791 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:21.792 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.792 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:21.792 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.792 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:21.792 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.792 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:21.792 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.792 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:21.792 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.792 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:21.792 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.792 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:21.792 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.792 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:21.792 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.792 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:21.792 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:29:21.792 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.792 15:33:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:21.792 Malloc1 00:29:21.792 [2024-11-06 15:33:49.342143] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:21.792 Malloc2 00:29:22.051 Malloc3 00:29:22.051 Malloc4 00:29:22.310 Malloc5 00:29:22.310 Malloc6 00:29:22.310 Malloc7 00:29:22.569 Malloc8 00:29:22.569 Malloc9 00:29:22.828 Malloc10 00:29:22.828 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.828 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:22.828 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:22.828 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:22.828 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3201142 00:29:22.828 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3201142 /var/tmp/bdevperf.sock 00:29:22.828 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3201142 ']' 00:29:22.828 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:22.828 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:22.828 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:22.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
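bdevperf (pid 3201142, the perfpid set above) is then pointed at those ten subsystems; the /dev/fd/63 in the command line traced just below is bash process substitution, so the launch in shutdown.sh amounts to this sketch (paths shortened, backgrounding implied by the perfpid assignment):

  build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
      -q 64 -o 65536 -w verify -t 10 &
  perfpid=$!
  waitforlisten "$perfpid" /var/tmp/bdevperf.sock

gen_nvmf_target_json expands each subsystem number into the bdev_nvme_attach_controller entries printed in full below (Nvme1..Nvme10, all over rdma to 192.168.100.8:4420).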
00:29:22.828 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:22.828 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:22.828 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.829 { 00:29:22.829 "params": { 00:29:22.829 "name": "Nvme$subsystem", 00:29:22.829 "trtype": "$TEST_TRANSPORT", 00:29:22.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.829 "adrfam": "ipv4", 00:29:22.829 "trsvcid": "$NVMF_PORT", 00:29:22.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.829 "hdgst": ${hdgst:-false}, 00:29:22.829 "ddgst": ${ddgst:-false} 00:29:22.829 }, 00:29:22.829 "method": "bdev_nvme_attach_controller" 00:29:22.829 } 00:29:22.829 EOF 00:29:22.829 )") 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.829 { 00:29:22.829 "params": { 00:29:22.829 "name": "Nvme$subsystem", 00:29:22.829 "trtype": "$TEST_TRANSPORT", 00:29:22.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.829 "adrfam": "ipv4", 00:29:22.829 "trsvcid": "$NVMF_PORT", 00:29:22.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.829 "hdgst": ${hdgst:-false}, 00:29:22.829 "ddgst": ${ddgst:-false} 00:29:22.829 }, 00:29:22.829 "method": "bdev_nvme_attach_controller" 00:29:22.829 } 00:29:22.829 EOF 00:29:22.829 )") 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.829 { 00:29:22.829 "params": { 00:29:22.829 "name": "Nvme$subsystem", 00:29:22.829 "trtype": "$TEST_TRANSPORT", 00:29:22.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.829 "adrfam": "ipv4", 00:29:22.829 "trsvcid": "$NVMF_PORT", 00:29:22.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.829 "hdgst": ${hdgst:-false}, 00:29:22.829 "ddgst": ${ddgst:-false} 00:29:22.829 }, 00:29:22.829 "method": 
"bdev_nvme_attach_controller" 00:29:22.829 } 00:29:22.829 EOF 00:29:22.829 )") 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.829 { 00:29:22.829 "params": { 00:29:22.829 "name": "Nvme$subsystem", 00:29:22.829 "trtype": "$TEST_TRANSPORT", 00:29:22.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.829 "adrfam": "ipv4", 00:29:22.829 "trsvcid": "$NVMF_PORT", 00:29:22.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.829 "hdgst": ${hdgst:-false}, 00:29:22.829 "ddgst": ${ddgst:-false} 00:29:22.829 }, 00:29:22.829 "method": "bdev_nvme_attach_controller" 00:29:22.829 } 00:29:22.829 EOF 00:29:22.829 )") 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.829 { 00:29:22.829 "params": { 00:29:22.829 "name": "Nvme$subsystem", 00:29:22.829 "trtype": "$TEST_TRANSPORT", 00:29:22.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.829 "adrfam": "ipv4", 00:29:22.829 "trsvcid": "$NVMF_PORT", 00:29:22.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.829 "hdgst": ${hdgst:-false}, 00:29:22.829 "ddgst": ${ddgst:-false} 00:29:22.829 }, 00:29:22.829 "method": "bdev_nvme_attach_controller" 00:29:22.829 } 00:29:22.829 EOF 00:29:22.829 )") 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.829 { 00:29:22.829 "params": { 00:29:22.829 "name": "Nvme$subsystem", 00:29:22.829 "trtype": "$TEST_TRANSPORT", 00:29:22.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.829 "adrfam": "ipv4", 00:29:22.829 "trsvcid": "$NVMF_PORT", 00:29:22.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.829 "hdgst": ${hdgst:-false}, 00:29:22.829 "ddgst": ${ddgst:-false} 00:29:22.829 }, 00:29:22.829 "method": "bdev_nvme_attach_controller" 00:29:22.829 } 00:29:22.829 EOF 00:29:22.829 )") 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.829 { 00:29:22.829 "params": { 00:29:22.829 "name": "Nvme$subsystem", 00:29:22.829 "trtype": "$TEST_TRANSPORT", 00:29:22.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.829 "adrfam": "ipv4", 00:29:22.829 "trsvcid": "$NVMF_PORT", 00:29:22.829 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.829 "hdgst": ${hdgst:-false}, 00:29:22.829 "ddgst": ${ddgst:-false} 00:29:22.829 }, 00:29:22.829 "method": "bdev_nvme_attach_controller" 00:29:22.829 } 00:29:22.829 EOF 00:29:22.829 )") 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.829 { 00:29:22.829 "params": { 00:29:22.829 "name": "Nvme$subsystem", 00:29:22.829 "trtype": "$TEST_TRANSPORT", 00:29:22.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.829 "adrfam": "ipv4", 00:29:22.829 "trsvcid": "$NVMF_PORT", 00:29:22.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.829 "hdgst": ${hdgst:-false}, 00:29:22.829 "ddgst": ${ddgst:-false} 00:29:22.829 }, 00:29:22.829 "method": "bdev_nvme_attach_controller" 00:29:22.829 } 00:29:22.829 EOF 00:29:22.829 )") 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.829 { 00:29:22.829 "params": { 00:29:22.829 "name": "Nvme$subsystem", 00:29:22.829 "trtype": "$TEST_TRANSPORT", 00:29:22.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.829 "adrfam": "ipv4", 00:29:22.829 "trsvcid": "$NVMF_PORT", 00:29:22.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.829 "hdgst": ${hdgst:-false}, 00:29:22.829 "ddgst": ${ddgst:-false} 00:29:22.829 }, 00:29:22.829 "method": "bdev_nvme_attach_controller" 00:29:22.829 } 00:29:22.829 EOF 00:29:22.829 )") 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.829 { 00:29:22.829 "params": { 00:29:22.829 "name": "Nvme$subsystem", 00:29:22.829 "trtype": "$TEST_TRANSPORT", 00:29:22.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.829 "adrfam": "ipv4", 00:29:22.829 "trsvcid": "$NVMF_PORT", 00:29:22.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.829 "hdgst": ${hdgst:-false}, 00:29:22.829 "ddgst": ${ddgst:-false} 00:29:22.829 }, 00:29:22.829 "method": "bdev_nvme_attach_controller" 00:29:22.829 } 00:29:22.829 EOF 00:29:22.829 )") 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:29:22.829 [2024-11-06 15:33:50.364849] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:29:22.829 [2024-11-06 15:33:50.364946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3201142 ] 00:29:22.829 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:22.830 15:33:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:22.830 "params": { 00:29:22.830 "name": "Nvme1", 00:29:22.830 "trtype": "rdma", 00:29:22.830 "traddr": "192.168.100.8", 00:29:22.830 "adrfam": "ipv4", 00:29:22.830 "trsvcid": "4420", 00:29:22.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:22.830 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:22.830 "hdgst": false, 00:29:22.830 "ddgst": false 00:29:22.830 }, 00:29:22.830 "method": "bdev_nvme_attach_controller" 00:29:22.830 },{ 00:29:22.830 "params": { 00:29:22.830 "name": "Nvme2", 00:29:22.830 "trtype": "rdma", 00:29:22.830 "traddr": "192.168.100.8", 00:29:22.830 "adrfam": "ipv4", 00:29:22.830 "trsvcid": "4420", 00:29:22.830 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:22.830 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:22.830 "hdgst": false, 00:29:22.830 "ddgst": false 00:29:22.830 }, 00:29:22.830 "method": "bdev_nvme_attach_controller" 00:29:22.830 },{ 00:29:22.830 "params": { 00:29:22.830 "name": "Nvme3", 00:29:22.830 "trtype": "rdma", 00:29:22.830 "traddr": "192.168.100.8", 00:29:22.830 "adrfam": "ipv4", 00:29:22.830 "trsvcid": "4420", 00:29:22.830 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:22.830 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:22.830 "hdgst": false, 00:29:22.830 "ddgst": false 00:29:22.830 }, 00:29:22.830 "method": "bdev_nvme_attach_controller" 00:29:22.830 },{ 00:29:22.830 "params": { 00:29:22.830 "name": "Nvme4", 00:29:22.830 "trtype": "rdma", 00:29:22.830 "traddr": "192.168.100.8", 00:29:22.830 "adrfam": "ipv4", 00:29:22.830 "trsvcid": "4420", 00:29:22.830 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:22.830 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:22.830 "hdgst": false, 00:29:22.830 "ddgst": false 00:29:22.830 }, 00:29:22.830 "method": "bdev_nvme_attach_controller" 00:29:22.830 },{ 00:29:22.830 "params": { 00:29:22.830 "name": "Nvme5", 00:29:22.830 "trtype": "rdma", 00:29:22.830 "traddr": "192.168.100.8", 00:29:22.830 "adrfam": "ipv4", 00:29:22.830 "trsvcid": "4420", 00:29:22.830 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:22.830 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:22.830 "hdgst": false, 00:29:22.830 "ddgst": false 00:29:22.830 }, 00:29:22.830 "method": "bdev_nvme_attach_controller" 00:29:22.830 },{ 00:29:22.830 "params": { 00:29:22.830 "name": "Nvme6", 00:29:22.830 "trtype": "rdma", 00:29:22.830 "traddr": "192.168.100.8", 00:29:22.830 "adrfam": "ipv4", 00:29:22.830 "trsvcid": "4420", 00:29:22.830 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:22.830 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:22.830 "hdgst": false, 00:29:22.830 "ddgst": false 00:29:22.830 }, 00:29:22.830 "method": "bdev_nvme_attach_controller" 00:29:22.830 },{ 00:29:22.830 "params": { 00:29:22.830 "name": "Nvme7", 00:29:22.830 "trtype": "rdma", 00:29:22.830 "traddr": "192.168.100.8", 00:29:22.830 "adrfam": "ipv4", 00:29:22.830 "trsvcid": "4420", 00:29:22.830 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:22.830 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:22.830 "hdgst": false, 00:29:22.830 "ddgst": false 00:29:22.830 }, 00:29:22.830 
"method": "bdev_nvme_attach_controller" 00:29:22.830 },{ 00:29:22.830 "params": { 00:29:22.830 "name": "Nvme8", 00:29:22.830 "trtype": "rdma", 00:29:22.830 "traddr": "192.168.100.8", 00:29:22.830 "adrfam": "ipv4", 00:29:22.830 "trsvcid": "4420", 00:29:22.830 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:22.830 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:22.830 "hdgst": false, 00:29:22.830 "ddgst": false 00:29:22.830 }, 00:29:22.830 "method": "bdev_nvme_attach_controller" 00:29:22.830 },{ 00:29:22.830 "params": { 00:29:22.830 "name": "Nvme9", 00:29:22.830 "trtype": "rdma", 00:29:22.830 "traddr": "192.168.100.8", 00:29:22.830 "adrfam": "ipv4", 00:29:22.830 "trsvcid": "4420", 00:29:22.830 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:22.830 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:22.830 "hdgst": false, 00:29:22.830 "ddgst": false 00:29:22.830 }, 00:29:22.830 "method": "bdev_nvme_attach_controller" 00:29:22.830 },{ 00:29:22.830 "params": { 00:29:22.830 "name": "Nvme10", 00:29:22.830 "trtype": "rdma", 00:29:22.830 "traddr": "192.168.100.8", 00:29:22.830 "adrfam": "ipv4", 00:29:22.830 "trsvcid": "4420", 00:29:22.830 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:22.830 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:22.830 "hdgst": false, 00:29:22.830 "ddgst": false 00:29:22.830 }, 00:29:22.830 "method": "bdev_nvme_attach_controller" 00:29:22.830 }' 00:29:23.089 [2024-11-06 15:33:50.514065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.089 [2024-11-06 15:33:50.631568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.468 Running I/O for 10 seconds... 00:29:24.468 15:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:24.468 15:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:29:24.468 15:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:24.468 15:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.468 15:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:24.468 15:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.468 15:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:24.468 15:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:24.469 15:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:24.469 15:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:24.469 15:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:24.469 15:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:24.469 15:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:24.469 15:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:24.469 15:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:24.469 15:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:24.469 15:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.469 15:33:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:24.728 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.728 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:29:24.728 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:29:24.728 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:24.987 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:24.987 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:24.987 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:24.987 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:24.987 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.987 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:24.987 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.987 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=155 00:29:24.987 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 155 -ge 100 ']' 00:29:24.987 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:24.987 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:24.987 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:24.987 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3200805 00:29:24.987 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3200805 ']' 00:29:24.987 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3200805 00:29:24.987 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:29:24.987 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:24.987 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3200805 00:29:25.246 
15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:25.246 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:25.246 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3200805' 00:29:25.246 killing process with pid 3200805 00:29:25.246 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 3200805 00:29:25.246 15:33:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 3200805 00:29:26.188 2581.00 IOPS, 161.31 MiB/s [2024-11-06T14:33:53.823Z] [2024-11-06 15:33:53.682231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.188 [2024-11-06 15:33:53.682311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.188 [2024-11-06 15:33:53.682331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.188 [2024-11-06 15:33:53.682344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.188 [2024-11-06 15:33:53.682362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.188 [2024-11-06 15:33:53.682374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.188 [2024-11-06 15:33:53.682388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.188 [2024-11-06 15:33:53.682401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.188 [2024-11-06 15:33:53.684933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:26.188 [2024-11-06 15:33:53.684965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
00:29:26.188 [2024-11-06 15:33:53.685009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.188 [2024-11-06 15:33:53.685025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.188 [2024-11-06 15:33:53.685040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.188 [2024-11-06 15:33:53.685053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.188 [2024-11-06 15:33:53.685067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.188 [2024-11-06 15:33:53.685080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.188 [2024-11-06 15:33:53.685093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.188 [2024-11-06 15:33:53.685105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.188 [2024-11-06 15:33:53.686996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:26.188 [2024-11-06 15:33:53.687017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:26.188 [2024-11-06 15:33:53.687041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.188 [2024-11-06 15:33:53.687055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.188 [2024-11-06 15:33:53.687069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.188 [2024-11-06 15:33:53.687082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.188 [2024-11-06 15:33:53.687096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.188 [2024-11-06 15:33:53.687108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.188 [2024-11-06 15:33:53.687122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.687142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.689116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:26.189 [2024-11-06 15:33:53.689142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:29:26.189 [2024-11-06 15:33:53.689169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.689183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.689197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.689210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.689223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.689235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.689248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.689261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.691652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:26.189 [2024-11-06 15:33:53.691678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:26.189 [2024-11-06 15:33:53.691708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.691727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.691746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.691762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.691781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.691798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.691815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.691831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.693886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:26.189 [2024-11-06 15:33:53.693912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:29:26.189 [2024-11-06 15:33:53.693951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.693969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.693988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.694004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.694022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.694038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.694059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.694076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.696152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:26.189 [2024-11-06 15:33:53.696177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:26.189 [2024-11-06 15:33:53.696208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.696226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.696245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.696262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.696279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.696296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.696313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.696329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.698769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:26.189 [2024-11-06 15:33:53.698795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:29:26.189 [2024-11-06 15:33:53.698823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.698841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32531 cdw0:0 sqhd:42e0 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.698859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.698876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32531 cdw0:0 sqhd:42e0 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.698894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.698910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32531 cdw0:0 sqhd:42e0 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.698929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.698945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32531 cdw0:0 sqhd:42e0 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.701301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:26.189 [2024-11-06 15:33:53.701326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:26.189 [2024-11-06 15:33:53.701355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.701377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.701395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.701412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.701430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.701446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.701464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.701480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.703655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:26.189 [2024-11-06 15:33:53.703687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
00:29:26.189 [2024-11-06 15:33:53.703727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.703753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32531 cdw0:0 sqhd:4ce0 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.703777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.703799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32531 cdw0:0 sqhd:4ce0 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.703822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.703845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32531 cdw0:0 sqhd:4ce0 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.703868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.189 [2024-11-06 15:33:53.703890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32531 cdw0:0 sqhd:4ce0 p:0 m:0 dnr:0 00:29:26.189 [2024-11-06 15:33:53.706144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:26.189 [2024-11-06 15:33:53.706178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:29:26.189 [2024-11-06 15:33:53.708968] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:29:26.189 [2024-11-06 15:33:53.711342] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:29:26.189 [2024-11-06 15:33:53.714194] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:29:26.189 [2024-11-06 15:33:53.716791] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:29:26.189 [2024-11-06 15:33:53.719522] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:29:26.189 [2024-11-06 15:33:53.722258] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:29:26.190 [2024-11-06 15:33:53.725415] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:29:26.190 [2024-11-06 15:33:53.727981] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:29:26.190 [2024-11-06 15:33:53.728161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000188dfcc0 len:0x10000 key:0x184000 00:29:26.190 [2024-11-06 15:33:53.728203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.728264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000188cfc00 len:0x10000 key:0x184000 00:29:26.190 [2024-11-06 15:33:53.728296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.728333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000188bfb40 len:0x10000 key:0x184000 00:29:26.190 [2024-11-06 15:33:53.728363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.728399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000188afa80 len:0x10000 key:0x184000 00:29:26.190 [2024-11-06 15:33:53.728428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.728465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001889f9c0 len:0x10000 key:0x184000 00:29:26.190 [2024-11-06 15:33:53.728493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.728530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001888f900 len:0x10000 key:0x184000 00:29:26.190 [2024-11-06 15:33:53.728557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.728593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001887f840 len:0x10000 key:0x184000 00:29:26.190 [2024-11-06 15:33:53.728621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.728657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001886f780 len:0x10000 key:0x184000 00:29:26.190 [2024-11-06 15:33:53.728684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.728720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001885f6c0 len:0x10000 key:0x184000 00:29:26.190 [2024-11-06 15:33:53.728747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.728784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001884f600 len:0x10000 key:0x184000 00:29:26.190 [2024-11-06 15:33:53.728811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.728847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001883f540 len:0x10000 key:0x184000 00:29:26.190 [2024-11-06 15:33:53.728880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.728915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001882f480 len:0x10000 key:0x184000 00:29:26.190 [2024-11-06 15:33:53.728942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.728978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001881f3c0 len:0x10000 key:0x184000 00:29:26.190 [2024-11-06 15:33:53.729005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.729041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001880f300 len:0x10000 key:0x184000 00:29:26.190 [2024-11-06 15:33:53.729067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.729103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018beffc0 len:0x10000 key:0x183c00 00:29:26.190 [2024-11-06 15:33:53.729140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.729177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018bdff00 len:0x10000 key:0x183c00 00:29:26.190 [2024-11-06 15:33:53.729204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.729240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018bcfe40 len:0x10000 key:0x183c00 00:29:26.190 [2024-11-06 15:33:53.729267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.729304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018bbfd80 len:0x10000 key:0x183c00 00:29:26.190 [2024-11-06 15:33:53.729331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.729367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018bafcc0 len:0x10000 
key:0x183c00 00:29:26.190 [2024-11-06 15:33:53.729394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.729430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b9fc00 len:0x10000 key:0x183c00 00:29:26.190 [2024-11-06 15:33:53.729457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.729493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b8fb40 len:0x10000 key:0x183c00 00:29:26.190 [2024-11-06 15:33:53.729521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.729556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b7fa80 len:0x10000 key:0x183c00 00:29:26.190 [2024-11-06 15:33:53.729584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.729622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b6f9c0 len:0x10000 key:0x183c00 00:29:26.190 [2024-11-06 15:33:53.729650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.729687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b5f900 len:0x10000 key:0x183c00 00:29:26.190 [2024-11-06 15:33:53.729714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.729750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b4f840 len:0x10000 key:0x183c00 00:29:26.190 [2024-11-06 15:33:53.729777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.729812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b3f780 len:0x10000 key:0x183c00 00:29:26.190 [2024-11-06 15:33:53.729840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.729875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b2f6c0 len:0x10000 key:0x183c00 00:29:26.190 [2024-11-06 15:33:53.729902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.729938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b1f600 len:0x10000 key:0x183c00 00:29:26.190 [2024-11-06 15:33:53.729965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.730000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b0f540 len:0x10000 key:0x183c00 00:29:26.190 [2024-11-06 15:33:53.730027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.730063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018aff480 len:0x10000 key:0x183c00 00:29:26.190 [2024-11-06 15:33:53.730091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.190 [2024-11-06 15:33:53.730161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018aef3c0 len:0x10000 key:0x183c00 00:29:26.190 [2024-11-06 15:33:53.730191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.730228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018adf300 len:0x10000 key:0x183c00 00:29:26.191 [2024-11-06 15:33:53.730256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.730293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018acf240 len:0x10000 key:0x183c00 00:29:26.191 [2024-11-06 15:33:53.730321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.730357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018abf180 len:0x10000 key:0x183c00 00:29:26.191 [2024-11-06 15:33:53.730393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.730429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018aaf0c0 len:0x10000 key:0x183c00 00:29:26.191 [2024-11-06 15:33:53.730457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.730508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a9f000 len:0x10000 key:0x183c00 00:29:26.191 [2024-11-06 15:33:53.730535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.730572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a8ef40 len:0x10000 key:0x183c00 00:29:26.191 [2024-11-06 15:33:53.730600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.730636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a7ee80 len:0x10000 key:0x183c00 00:29:26.191 [2024-11-06 15:33:53.730662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.730698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a6edc0 len:0x10000 key:0x183c00 00:29:26.191 [2024-11-06 15:33:53.730725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.730762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a5ed00 len:0x10000 key:0x183c00 00:29:26.191 [2024-11-06 15:33:53.730789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.730825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a4ec40 len:0x10000 key:0x183c00 00:29:26.191 [2024-11-06 15:33:53.730851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.730887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a3eb80 len:0x10000 key:0x183c00 00:29:26.191 [2024-11-06 15:33:53.730914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.730949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a2eac0 len:0x10000 key:0x183c00 00:29:26.191 [2024-11-06 15:33:53.730977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.731012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a1ea00 len:0x10000 key:0x183c00 00:29:26.191 [2024-11-06 15:33:53.731039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.731073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a0e940 len:0x10000 key:0x183c00 00:29:26.191 [2024-11-06 15:33:53.731100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.731149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018deffc0 len:0x10000 key:0x183300 00:29:26.191 [2024-11-06 15:33:53.731179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.731213] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ddff00 len:0x10000 key:0x183300 00:29:26.191 [2024-11-06 15:33:53.731241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.731276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018dcfe40 len:0x10000 key:0x183300 00:29:26.191 [2024-11-06 15:33:53.731304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.731338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018dbfd80 len:0x10000 key:0x183300 00:29:26.191 [2024-11-06 15:33:53.731365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.731400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018dafcc0 len:0x10000 key:0x183300 00:29:26.191 [2024-11-06 15:33:53.731428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.731464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d9fc00 len:0x10000 key:0x183300 00:29:26.191 [2024-11-06 15:33:53.731491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.731528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d8fb40 len:0x10000 key:0x183300 00:29:26.191 [2024-11-06 15:33:53.731554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.731590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d7fa80 len:0x10000 key:0x183300 00:29:26.191 [2024-11-06 15:33:53.731616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.731650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d6f9c0 len:0x10000 key:0x183300 00:29:26.191 [2024-11-06 15:33:53.731677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.731711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d5f900 len:0x10000 key:0x183300 00:29:26.191 [2024-11-06 15:33:53.731737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.731772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x200018d4f840 len:0x10000 key:0x183300 00:29:26.191 [2024-11-06 15:33:53.731798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.731833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d3f780 len:0x10000 key:0x183300 00:29:26.191 [2024-11-06 15:33:53.731862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.731898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d2f6c0 len:0x10000 key:0x183300 00:29:26.191 [2024-11-06 15:33:53.731926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.731962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d1f600 len:0x10000 key:0x183300 00:29:26.191 [2024-11-06 15:33:53.731989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.732024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d0f540 len:0x10000 key:0x183300 00:29:26.191 [2024-11-06 15:33:53.732051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.732086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018cff480 len:0x10000 key:0x183300 00:29:26.191 [2024-11-06 15:33:53.732112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.732156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018cef3c0 len:0x10000 key:0x183300 00:29:26.191 [2024-11-06 15:33:53.732184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.732219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018cdf300 len:0x10000 key:0x183300 00:29:26.191 [2024-11-06 15:33:53.732247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.732283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000188efd80 len:0x10000 key:0x184000 00:29:26.191 [2024-11-06 15:33:53.732310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.736201] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
00:29:26.191 [2024-11-06 15:33:53.736260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edfcc0 len:0x10000 key:0x183700 00:29:26.191 [2024-11-06 15:33:53.736291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.191 [2024-11-06 15:33:53.736335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfc00 len:0x10000 key:0x183700 00:29:26.192 [2024-11-06 15:33:53.736365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.736401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfb40 len:0x10000 key:0x183700 00:29:26.192 [2024-11-06 15:33:53.736430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.736465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eafa80 len:0x10000 key:0x183700 00:29:26.192 [2024-11-06 15:33:53.736499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.736536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9f9c0 len:0x10000 key:0x183700 00:29:26.192 [2024-11-06 15:33:53.736564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.736600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8f900 len:0x10000 key:0x183700 00:29:26.192 [2024-11-06 15:33:53.736627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.736663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7f840 len:0x10000 key:0x183700 00:29:26.192 [2024-11-06 15:33:53.736689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.736725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6f780 len:0x10000 key:0x183700 00:29:26.192 [2024-11-06 15:33:53.736754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.736790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5f6c0 len:0x10000 key:0x183700 00:29:26.192 [2024-11-06 15:33:53.736817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.736854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4f600 len:0x10000 key:0x183700 00:29:26.192 [2024-11-06 15:33:53.736882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.736917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3f540 len:0x10000 key:0x183700 00:29:26.192 [2024-11-06 15:33:53.736944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.736980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f480 len:0x10000 key:0x183700 00:29:26.192 [2024-11-06 15:33:53.737008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.737043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f3c0 len:0x10000 key:0x183700 00:29:26.192 [2024-11-06 15:33:53.737070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.737107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f300 len:0x10000 key:0x183700 00:29:26.192 [2024-11-06 15:33:53.737144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.737180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ccf240 len:0x10000 key:0x183300 00:29:26.192 [2024-11-06 15:33:53.737212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.737247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018cbf180 len:0x10000 key:0x183300 00:29:26.192 [2024-11-06 15:33:53.737274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.737310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018caf0c0 len:0x10000 key:0x183300 00:29:26.192 [2024-11-06 15:33:53.737337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.737374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c9f000 len:0x10000 key:0x183300 00:29:26.192 [2024-11-06 15:33:53.737401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.737436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 
lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c8ef40 len:0x10000 key:0x183300 00:29:26.192 [2024-11-06 15:33:53.737465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.737500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c7ee80 len:0x10000 key:0x183300 00:29:26.192 [2024-11-06 15:33:53.737527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.737562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c6edc0 len:0x10000 key:0x183300 00:29:26.192 [2024-11-06 15:33:53.737590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.737626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c5ed00 len:0x10000 key:0x183300 00:29:26.192 [2024-11-06 15:33:53.737652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.737687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c4ec40 len:0x10000 key:0x183300 00:29:26.192 [2024-11-06 15:33:53.737715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.737752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c3eb80 len:0x10000 key:0x183300 00:29:26.192 [2024-11-06 15:33:53.737779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.737814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c2eac0 len:0x10000 key:0x183300 00:29:26.192 [2024-11-06 15:33:53.737841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.737877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c1ea00 len:0x10000 key:0x183300 00:29:26.192 [2024-11-06 15:33:53.737904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.737944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c0e940 len:0x10000 key:0x183300 00:29:26.192 [2024-11-06 15:33:53.737971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.738007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000191effc0 
len:0x10000 key:0x183a00 00:29:26.192 [2024-11-06 15:33:53.738033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.738070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000191dff00 len:0x10000 key:0x183a00 00:29:26.192 [2024-11-06 15:33:53.738096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.738143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000191cfe40 len:0x10000 key:0x183a00 00:29:26.192 [2024-11-06 15:33:53.738173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.738209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000191bfd80 len:0x10000 key:0x183a00 00:29:26.192 [2024-11-06 15:33:53.738238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.738274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000191afcc0 len:0x10000 key:0x183a00 00:29:26.192 [2024-11-06 15:33:53.738302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.738339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001919fc00 len:0x10000 key:0x183a00 00:29:26.192 [2024-11-06 15:33:53.738367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.738402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001918fb40 len:0x10000 key:0x183a00 00:29:26.192 [2024-11-06 15:33:53.738430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.738466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001917fa80 len:0x10000 key:0x183a00 00:29:26.192 [2024-11-06 15:33:53.738508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.192 [2024-11-06 15:33:53.738542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001916f9c0 len:0x10000 key:0x183a00 00:29:26.192 [2024-11-06 15:33:53.738570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.738606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001915f900 len:0x10000 key:0x183a00 00:29:26.193 [2024-11-06 
15:33:53.738633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.738669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001914f840 len:0x10000 key:0x183a00 00:29:26.193 [2024-11-06 15:33:53.738699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.738735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001913f780 len:0x10000 key:0x183a00 00:29:26.193 [2024-11-06 15:33:53.738762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.738796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001912f6c0 len:0x10000 key:0x183a00 00:29:26.193 [2024-11-06 15:33:53.738823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.738858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001911f600 len:0x10000 key:0x183a00 00:29:26.193 [2024-11-06 15:33:53.738885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.738919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001910f540 len:0x10000 key:0x183a00 00:29:26.193 [2024-11-06 15:33:53.738946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.738980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190ff480 len:0x10000 key:0x183a00 00:29:26.193 [2024-11-06 15:33:53.739007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.739042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190ef3c0 len:0x10000 key:0x183a00 00:29:26.193 [2024-11-06 15:33:53.739068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.739103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190df300 len:0x10000 key:0x183a00 00:29:26.193 [2024-11-06 15:33:53.739141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.739178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cf240 len:0x10000 key:0x183a00 00:29:26.193 [2024-11-06 15:33:53.739206] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.739241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bf180 len:0x10000 key:0x183a00 00:29:26.193 [2024-11-06 15:33:53.739268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.739303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190af0c0 len:0x10000 key:0x183a00 00:29:26.193 [2024-11-06 15:33:53.739330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.739364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909f000 len:0x10000 key:0x183a00 00:29:26.193 [2024-11-06 15:33:53.739395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.739429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908ef40 len:0x10000 key:0x183a00 00:29:26.193 [2024-11-06 15:33:53.739457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.739492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907ee80 len:0x10000 key:0x183a00 00:29:26.193 [2024-11-06 15:33:53.739520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.739555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906edc0 len:0x10000 key:0x183a00 00:29:26.193 [2024-11-06 15:33:53.739583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.739618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905ed00 len:0x10000 key:0x183a00 00:29:26.193 [2024-11-06 15:33:53.739645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.739679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904ec40 len:0x10000 key:0x183a00 00:29:26.193 [2024-11-06 15:33:53.739706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.739740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903eb80 len:0x10000 key:0x183a00 00:29:26.193 [2024-11-06 15:33:53.739768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 
sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.739801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902eac0 len:0x10000 key:0x183a00 00:29:26.193 [2024-11-06 15:33:53.739828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.739863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901ea00 len:0x10000 key:0x183a00 00:29:26.193 [2024-11-06 15:33:53.739892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.739926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900e940 len:0x10000 key:0x183a00 00:29:26.193 [2024-11-06 15:33:53.739954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.739990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000193effc0 len:0x10000 key:0x183600 00:29:26.193 [2024-11-06 15:33:53.740017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.740052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000193dff00 len:0x10000 key:0x183600 00:29:26.193 [2024-11-06 15:33:53.740080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.740118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000193cfe40 len:0x10000 key:0x183600 00:29:26.193 [2024-11-06 15:33:53.740163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.740199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000193bfd80 len:0x10000 key:0x183600 00:29:26.193 [2024-11-06 15:33:53.740229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.740264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000193afcc0 len:0x10000 key:0x183600 00:29:26.193 [2024-11-06 15:33:53.740291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 15:33:53.740327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eefd80 len:0x10000 key:0x183700 00:29:26.193 [2024-11-06 15:33:53.740354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32531 cdw0:0 sqhd:9d60 p:0 m:0 dnr:0 00:29:26.193 [2024-11-06 
15:33:53.774858] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:29:26.194 [2024-11-06 15:33:53.774988] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:29:26.194 [2024-11-06 15:33:53.775015] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:29:26.194 [2024-11-06 15:33:53.775033] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:29:26.194 [2024-11-06 15:33:53.775050] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:29:26.194 [2024-11-06 15:33:53.775069] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:29:26.194 [2024-11-06 15:33:53.775086] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:29:26.194 [2024-11-06 15:33:53.775103] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:29:26.194 [2024-11-06 15:33:53.775120] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:29:26.194 [2024-11-06 15:33:53.775144] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:29:26.194 [2024-11-06 15:33:53.775161] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:29:26.194 [2024-11-06 15:33:53.782367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:29:26.194 [2024-11-06 15:33:53.782405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:29:26.194 [2024-11-06 15:33:53.783422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:29:26.194 [2024-11-06 15:33:53.783453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:29:26.194 [2024-11-06 15:33:53.783469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:29:26.194 [2024-11-06 15:33:53.783485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:29:26.194 [2024-11-06 15:33:53.786897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:29:26.194 task offset: 35840 on job bdev=Nvme1n1 fails
00:29:26.194
00:29:26.194 Latency(us)
00:29:26.194 [2024-11-06T14:33:53.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:26.194 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:26.194 Job: Nvme1n1 ended in about 1.99 seconds with error
00:29:26.194 Verification LBA range: start 0x0 length 0x400
00:29:26.194 Nvme1n1 : 1.99 128.75 8.05 32.19 0.00 394095.04 34648.60 1072282.94
00:29:26.194 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:26.194 Job: Nvme2n1 ended in about 1.99 seconds with error
00:29:26.194 Verification LBA range: start 0x0 length 0x400
00:29:26.194 Nvme2n1 : 1.99 128.69 8.04 32.17 0.00 390566.78 37156.06 1064988.49
00:29:26.194 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:26.194 Job: Nvme3n1 ended in about 1.99 seconds with error
00:29:26.194 Verification LBA range: start 0x0 length 0x400
00:29:26.194 Nvme3n1 : 1.99 132.15 8.26 32.16 0.00 378958.37 7094.98 1064988.49
00:29:26.194 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:26.194 Job: Nvme4n1 ended in about 1.99 seconds with error
00:29:26.194 Verification LBA range: start 0x0 length 0x400
00:29:26.194 Nvme4n1 : 1.99 139.12 8.69 32.14 0.00 360299.81 6040.71 1064988.49
00:29:26.194 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:26.194 Job: Nvme5n1 ended in about 1.99 seconds with error
00:29:26.194 Verification LBA range: start 0x0 length 0x400
00:29:26.194 Nvme5n1 : 1.99 132.53 8.28 32.13 0.00 371375.94 10884.67 1064988.49
00:29:26.194 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:26.194 Job: Nvme6n1 ended in about 1.99 seconds with error
00:29:26.194 Verification LBA range: start 0x0 length 0x400
00:29:26.194 Nvme6n1 : 1.99 136.49 8.53 32.11 0.00 359402.12 12822.26 1057694.05
00:29:26.194 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:26.194 Job: Nvme7n1 ended in about 1.99 seconds with error
00:29:26.194 Verification LBA range: start 0x0 length 0x400
00:29:26.194 Nvme7n1 : 1.99 132.92 8.31 32.10 0.00 363866.92 18692.01 1057694.05
00:29:26.194 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:26.194 Job: Nvme8n1 ended in about 1.99 seconds with error
00:29:26.194 Verification LBA range: start 0x0 length 0x400
00:29:26.194 Nvme8n1 : 1.99 140.38 8.77 32.09 0.00 345056.98 21769.35 1057694.05
00:29:26.194 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:26.194 Job: Nvme9n1 ended in about 1.95 seconds with error
00:29:26.194 Verification LBA range: start 0x0 length 0x400
00:29:26.194 Nvme9n1 : 1.95 131.59 8.22 32.90 0.00 359550.44 61090.95 1094166.26
00:29:26.194 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:26.194 Job: Nvme10n1 ended in about 1.95 seconds with error
00:29:26.194 Verification LBA range: start 0x0 length 0x400
00:29:26.194 Nvme10n1 : 1.95 98.29 6.14 32.76 0.00 446942.16 62914.56 1079577.38
00:29:26.194 [2024-11-06T14:33:53.829Z] ===================================================================================================================
00:29:26.194 [2024-11-06T14:33:53.829Z] Total : 1300.89 81.31 322.75 0.00 375140.94 6040.71 1094166.26
00:29:26.454 [2024-11-06 15:33:53.883822] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:26.454 [2024-11-06 15:33:53.883908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:29:26.454 [2024-11-06 15:33:53.883942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:29:26.454 [2024-11-06 15:33:53.883966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:29:26.454 [2024-11-06 15:33:53.894309] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:29:26.454 [2024-11-06 15:33:53.894342] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:29:26.454 [2024-11-06 15:33:53.894355] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800
00:29:26.454 [2024-11-06 15:33:53.894488] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:29:26.454 [2024-11-06 15:33:53.894503] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:29:26.454 [2024-11-06 15:33:53.894513] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200007fff240
00:29:26.454 [2024-11-06 15:33:53.899351] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:29:26.454 [2024-11-06 15:33:53.899378] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:29:26.454 [2024-11-06 15:33:53.899390] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200016fbb200
00:29:26.454 [2024-11-06 15:33:53.899490] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:29:26.454 [2024-11-06 15:33:53.899505] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:29:26.454 [2024-11-06 15:33:53.899515] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200016fd3dc0
00:29:26.454 [2024-11-06 15:33:53.899594] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:29:26.454 [2024-11-06
15:33:53.899608] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:26.454 [2024-11-06 15:33:53.899619] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200016fbe180 00:29:26.454 [2024-11-06 15:33:53.899710] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:26.454 [2024-11-06 15:33:53.899724] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:26.454 [2024-11-06 15:33:53.899734] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200016fc7500 00:29:26.454 [2024-11-06 15:33:53.900724] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:26.454 [2024-11-06 15:33:53.900743] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:26.454 [2024-11-06 15:33:53.900754] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200016fa3bc0 00:29:26.454 [2024-11-06 15:33:53.900829] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:26.454 [2024-11-06 15:33:53.900843] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:26.454 [2024-11-06 15:33:53.900854] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200016f83a80 00:29:26.454 [2024-11-06 15:33:53.900931] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:26.454 [2024-11-06 15:33:53.900946] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:26.454 [2024-11-06 15:33:53.900957] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200016f8ee00 00:29:26.454 [2024-11-06 15:33:53.901056] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:26.454 [2024-11-06 15:33:53.901074] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:26.454 [2024-11-06 15:33:53.901084] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200016f8e680 00:29:27.392 [2024-11-06 15:33:54.898999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:27.392 [2024-11-06 15:33:54.899056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:27.392 [2024-11-06 15:33:54.900326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:27.392 [2024-11-06 15:33:54.900346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:29:27.392 [2024-11-06 15:33:54.900399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:27.392 [2024-11-06 15:33:54.900414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:27.392 [2024-11-06 15:33:54.900429] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:29:27.392 [2024-11-06 15:33:54.900449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:27.392 [2024-11-06 15:33:54.900478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:27.392 [2024-11-06 15:33:54.900490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:27.392 [2024-11-06 15:33:54.900502] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] already in failed state 00:29:27.392 [2024-11-06 15:33:54.900515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:27.392 [2024-11-06 15:33:54.903332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:27.392 [2024-11-06 15:33:54.903358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:27.392 [2024-11-06 15:33:54.904699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:27.392 [2024-11-06 15:33:54.904718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:27.392 [2024-11-06 15:33:54.906204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:27.392 [2024-11-06 15:33:54.906222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:27.392 [2024-11-06 15:33:54.907435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:27.392 [2024-11-06 15:33:54.907452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:27.392 [2024-11-06 15:33:54.908572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:27.392 [2024-11-06 15:33:54.908589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:29:27.392 [2024-11-06 15:33:54.909857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:27.392 [2024-11-06 15:33:54.909873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:29:27.392 [2024-11-06 15:33:54.911259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:27.392 [2024-11-06 15:33:54.911286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:29:27.392 [2024-11-06 15:33:54.912500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:29:27.392 [2024-11-06 15:33:54.912522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:29:27.392 [2024-11-06 15:33:54.912537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:27.392 [2024-11-06 15:33:54.912553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:27.392 [2024-11-06 15:33:54.912569] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] already in failed state 00:29:27.392 [2024-11-06 15:33:54.912588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:29:27.392 [2024-11-06 15:33:54.912612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:27.392 [2024-11-06 15:33:54.912627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:27.392 [2024-11-06 15:33:54.912642] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] already in failed state 00:29:27.392 [2024-11-06 15:33:54.912658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:27.392 [2024-11-06 15:33:54.912681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:27.392 [2024-11-06 15:33:54.912696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:27.392 [2024-11-06 15:33:54.912711] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] already in failed state 00:29:27.392 [2024-11-06 15:33:54.912726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:29:27.392 [2024-11-06 15:33:54.912744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:27.392 [2024-11-06 15:33:54.912758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:27.392 [2024-11-06 15:33:54.912772] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] already in failed state 00:29:27.392 [2024-11-06 15:33:54.912788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:29:27.392 [2024-11-06 15:33:54.912910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:29:27.392 [2024-11-06 15:33:54.912928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:29:27.392 [2024-11-06 15:33:54.912943] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] already in failed state 00:29:27.392 [2024-11-06 15:33:54.912959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:29:27.392 [2024-11-06 15:33:54.912978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:27.392 [2024-11-06 15:33:54.912994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:27.392 [2024-11-06 15:33:54.913008] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] already in failed state 00:29:27.392 [2024-11-06 15:33:54.913024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:27.392 [2024-11-06 15:33:54.913043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:29:27.392 [2024-11-06 15:33:54.913058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:29:27.392 [2024-11-06 15:33:54.913077] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] already in failed state 00:29:27.392 [2024-11-06 15:33:54.913093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:29:27.392 [2024-11-06 15:33:54.913111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:29:27.392 [2024-11-06 15:33:54.913162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:29:27.392 [2024-11-06 15:33:54.913178] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] already in failed state 00:29:27.392 [2024-11-06 15:33:54.913194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:29:28.772 15:33:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:29.710 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3201142 00:29:29.710 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:29:29.710 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3201142 00:29:29.710 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:29:29.710 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:29.710 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:29:29.710 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:29.710 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 3201142 00:29:29.710 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:29:29.710 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:29.710 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:29:29.710 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:29:29.710 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:29:29.710 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:29.710 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:29.710 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:29.710 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:29.710 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:29.710 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:29.710 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:29.710 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:29.710 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:29:29.710 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:29:29.710 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:29.710 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:29.710 15:33:57 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:29:29.711 rmmod nvme_rdma 00:29:29.711 rmmod nvme_fabrics 00:29:29.711 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:29.711 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:29.711 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:29.711 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3200805 ']' 00:29:29.711 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3200805 00:29:29.711 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3200805 ']' 00:29:29.711 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3200805 00:29:29.711 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3200805) - No such process 00:29:29.711 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3200805 is not found' 00:29:29.711 Process with pid 3200805 is not found 00:29:29.711 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:29.711 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:29:29.711 00:29:29.711 real 0m9.685s 00:29:29.711 user 0m34.853s 00:29:29.711 sys 0m2.002s 00:29:29.711 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:29.711 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:29.711 ************************************ 00:29:29.711 END TEST nvmf_shutdown_tc3 00:29:29.711 ************************************ 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ mlx5 == \e\8\1\0 ]] 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:29.972 ************************************ 00:29:29.972 START TEST nvmf_shutdown_tc4 00:29:29.972 ************************************ 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:29:29.972 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:29:29.972 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:29:29.972 Found net devices under 0000:18:00.0: mlx_0_0 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:29.972 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:29:29.973 Found net devices under 0000:18:00.1: mlx_0_1 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # rdma_device_init 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # uname 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_core 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:29:29.973 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:29.973 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:29:29.973 altname enp24s0f0np0 00:29:29.973 altname ens785f0np0 00:29:29.973 inet 192.168.100.8/24 scope global mlx_0_0 00:29:29.973 valid_lft forever preferred_lft forever 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:29.973 15:33:57 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:29:29.973 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:29.973 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:29:29.973 altname enp24s0f1np1 00:29:29.973 altname ens785f1np1 00:29:29.973 inet 192.168.100.9/24 scope global mlx_0_1 00:29:29.973 valid_lft forever preferred_lft forever 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:29.973 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:29:30.234 192.168.100.9' 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:29:30.234 192.168.100.9' 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # head -n 1 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:29:30.234 192.168.100.9' 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # tail -n +2 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # head -n 1 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:29:30.234 15:33:57 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3202164 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3202164 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 3202164 ']' 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:30.234 15:33:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:30.234 [2024-11-06 15:33:57.789815] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:29:30.234 [2024-11-06 15:33:57.789925] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.494 [2024-11-06 15:33:57.947070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:30.494 [2024-11-06 15:33:58.059001] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:30.494 [2024-11-06 15:33:58.059060] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.494 [2024-11-06 15:33:58.059074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.494 [2024-11-06 15:33:58.059088] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.494 [2024-11-06 15:33:58.059102] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:30.494 [2024-11-06 15:33:58.061514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:30.494 [2024-11-06 15:33:58.061602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:30.494 [2024-11-06 15:33:58.061664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.494 [2024-11-06 15:33:58.061690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:31.062 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:31.062 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:29:31.062 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:31.062 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:31.062 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:31.062 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:31.062 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:31.062 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.062 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:31.062 [2024-11-06 15:33:58.670223] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7f65a736a940) succeed. 00:29:31.062 [2024-11-06 15:33:58.679801] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7f65a7326940) succeed. 
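With both IB devices registered (the mlx5_0/mlx5_1 notices above), the rpc_cmd wrapper creates the RDMA transport. Issued directly with SPDK's rpc.py, the traced call is roughly:

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
# Same options as the traced rpc_cmd: RDMA transport, 1024 shared receive
# buffers, 8192-byte in-capsule data size.
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192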
00:29:31.630 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.630 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:31.631 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:31.631 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:31.631 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:31.631 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:31.631 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:31.631 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:31.631 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:31.631 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:31.631 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:31.631 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:31.631 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:31.631 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:31.631 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:31.631 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:31.631 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:31.631 15:33:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:31.631 15:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:31.631 15:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:31.631 15:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:31.631 15:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:31.631 15:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:31.631 15:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:31.631 15:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:31.631 15:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:31.631 15:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:29:31.631 15:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.631 15:33:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:31.631 Malloc1 00:29:31.631 [2024-11-06 15:33:59.125096] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:31.631 Malloc2 00:29:31.890 Malloc3 00:29:31.890 Malloc4 00:29:31.890 Malloc5 00:29:32.150 Malloc6 00:29:32.150 Malloc7 00:29:32.410 Malloc8 00:29:32.410 Malloc9 00:29:32.410 Malloc10 00:29:32.410 15:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.410 15:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:32.410 15:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:32.410 15:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:32.669 15:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3202577 00:29:32.669 15:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4 00:29:32.669 15:34:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:32.669 [2024-11-06 15:34:00.196587] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
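The 'for i in "${num_subsystems[@]}" / cat' loop above appends one block of RPC commands per subsystem (1..10) to rpcs.txt, which the single rpc_cmd at shutdown.sh@36 then replays; the Malloc1..Malloc10 bdevs and the 192.168.100.8:4420 listener reported above are the result. The exact lines live in target/shutdown.sh; the following is only an illustrative guess at their shape (bdev sizes and serial numbers are placeholders):

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
rm -f rpcs.txt
for i in {1..10}; do
    cat <<EOF >> rpcs.txt
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
EOF
done
# The harness batches these through its rpc_cmd helper; replaying them one by
# one also works:
while read -r cmd; do $RPC $cmd; done < rpcs.txt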
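Once the subsystems are up, shutdown_tc4 starts a 20-second perf run against the first target (perfpid=3202577 above), sleeps 5 seconds, and then kills the nvmf target (pid 3202164) underneath it; that is what produces the error storm below. Roughly, with the perf arguments taken verbatim from the trace:

PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf

# Queue depth 128, 44 KiB (45056-byte) random writes for 20 s against the
# RDMA listener on 192.168.100.8:4420.
$PERF -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
      -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4 &
perfpid=$!

sleep 5
kill "$nvmfpid"   # the target pid, 3202164 in this run

# Expected fallout, visible below: every in-flight write completes with
# sct=0/sc=8 (NVMe generic status 0x08, "Command Aborted due to SQ Deletion"),
# and each controller (cnode2, cnode3, ...) reports a failed keep-alive as its
# admin qpair is torn down.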
00:29:37.944 15:34:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:37.944 15:34:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3202164 00:29:37.945 15:34:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3202164 ']' 00:29:37.945 15:34:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3202164 00:29:37.945 15:34:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:29:37.945 15:34:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:37.945 15:34:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3202164 00:29:37.945 15:34:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:37.945 15:34:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:37.945 15:34:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3202164' 00:29:37.945 killing process with pid 3202164 00:29:37.945 15:34:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 3202164 00:29:37.945 15:34:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 3202164 00:29:37.945 NVMe io qpair process completion error 00:29:37.945 NVMe io qpair process completion error 00:29:37.945 NVMe io qpair process completion error 00:29:37.945 NVMe io qpair process completion error 00:29:37.945 NVMe io qpair process completion error 00:29:37.945 NVMe io qpair process completion error 00:29:37.945 NVMe io qpair process completion error 00:29:37.945 NVMe io qpair process completion error 00:29:37.945 NVMe io qpair process completion error 00:29:37.945 NVMe io qpair process completion error 00:29:38.885 Write completed with error (sct=0, sc=8) 00:29:38.885 starting I/O failed: -6 00:29:38.885 Write completed with error (sct=0, sc=8) 00:29:38.885 starting I/O failed: -6 00:29:38.885 Write completed with error (sct=0, sc=8) 00:29:38.885 starting I/O failed: -6 00:29:38.885 Write completed with error (sct=0, sc=8) 00:29:38.885 starting I/O failed: -6 00:29:38.885 Write completed with error (sct=0, sc=8) 00:29:38.885 starting I/O failed: -6 00:29:38.885 Write completed with error (sct=0, sc=8) 00:29:38.885 starting I/O failed: -6 00:29:38.885 Write completed with error (sct=0, sc=8) 00:29:38.885 starting I/O failed: -6 00:29:38.885 Write completed with error (sct=0, sc=8) 00:29:38.885 starting I/O failed: -6 00:29:38.885 Write completed with error (sct=0, sc=8) 00:29:38.885 starting I/O failed: -6 00:29:38.885 Write completed with error (sct=0, sc=8) 00:29:38.885 starting I/O failed: -6 00:29:38.885 Write completed with error (sct=0, sc=8) 00:29:38.885 starting I/O failed: -6 00:29:38.885 Write completed with error (sct=0, sc=8) 00:29:38.885 starting I/O failed: -6 00:29:38.885 Write completed with error (sct=0, sc=8) 00:29:38.885 starting I/O failed: -6 00:29:38.885 Write completed with error 
(sct=0, sc=8) 00:29:38.885 starting I/O failed: -6 00:29:38.885 Write completed with error (sct=0, sc=8) 00:29:38.885 starting I/O failed: -6 00:29:38.885 Write completed with error (sct=0, sc=8) 00:29:38.885 starting I/O failed: -6 00:29:38.885 Write completed with error (sct=0, sc=8) 00:29:38.885 starting I/O failed: -6 00:29:38.885 Write completed with error (sct=0, sc=8) 00:29:38.885 starting I/O failed: -6 00:29:38.885 Write completed with error (sct=0, sc=8) 00:29:38.885 starting I/O failed: -6 00:29:38.885 Write completed with error (sct=0, sc=8) 00:29:38.885 starting I/O failed: -6 00:29:38.885 Write completed with error (sct=0, sc=8) 00:29:38.885 starting I/O failed: -6 00:29:38.885 Write completed with error (sct=0, sc=8) 00:29:38.885 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 
00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 starting I/O failed: -6 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 starting I/O failed: -6 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 starting I/O failed: -6 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 starting I/O failed: -6 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 starting I/O failed: -6 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 starting I/O failed: -6 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 [2024-11-06 
15:34:06.305512] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Submitting Keep Alive failed 00:29:38.886 starting I/O failed: -6 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 starting I/O failed: -6 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 starting I/O failed: -6 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.886 Write 
completed with error (sct=0, sc=8) 00:29:38.886 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 starting I/O failed: -6 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 starting I/O failed: -6 00:29:38.887 [2024-11-06 15:34:06.330535] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Submitting Keep Alive 
failed 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 starting I/O failed: -6 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 
00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 Write completed with error (sct=0, sc=8) 00:29:38.887 starting I/O failed: -6 00:29:38.887 [2024-11-06 15:34:06.356440] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Submitting Keep Alive failed 00:29:38.887 
Write completed with error (sct=0, sc=8) 00:29:38.887 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write 
completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 
00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 [2024-11-06 15:34:06.380915] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Submitting Keep Alive failed 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 starting I/O failed: -6 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.888 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 
Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 
00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 starting I/O failed: -6 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 starting I/O failed: -6 00:29:38.889 [2024-11-06 15:34:06.403501] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Submitting Keep Alive failed 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 starting I/O failed: -6 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 starting I/O failed: -6 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 starting I/O failed: -6 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 starting I/O failed: -6 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 starting I/O failed: -6 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 starting I/O failed: -6 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 starting I/O failed: -6 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 starting I/O failed: -6 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 starting I/O failed: -6 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 starting I/O failed: -6 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 starting I/O failed: -6 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 starting I/O failed: -6 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 starting I/O failed: -6 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 starting I/O failed: -6 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 starting I/O failed: -6 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 starting I/O failed: -6 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 starting I/O failed: -6 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 starting I/O failed: -6 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 starting I/O failed: -6 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed with error (sct=0, sc=8) 00:29:38.889 Write completed 
with error (sct=0, sc=8)
[... several hundred repeated "Write completed with error (sct=0, sc=8)" entries, interleaved with "starting I/O failed: -6", elided ...]
00:29:38.890 [2024-11-06 15:34:06.428122] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Submitting Keep Alive failed
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:29:38.891 [2024-11-06 15:34:06.453623] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Submitting Keep Alive failed
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:29:38.891 [2024-11-06 15:34:06.480375] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Submitting Keep Alive failed
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:29:38.892 [2024-11-06 15:34:06.507481] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Submitting Keep Alive failed
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:29:39.153 [2024-11-06 15:34:06.532017] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:29:39.153 Initializing NVMe Controllers
00:29:39.153 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode3
00:29:39.153 Controller IO queue size 128, less than required.
00:29:39.153 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:39.153 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode6
00:29:39.153 Controller IO queue size 128, less than required.
00:29:39.153 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:39.153 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode7
00:29:39.153 Controller IO queue size 128, less than required.
00:29:39.153 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:39.153 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode2
00:29:39.153 Controller IO queue size 128, less than required.
00:29:39.153 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:39.153 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode10
00:29:39.153 Controller IO queue size 128, less than required.
00:29:39.153 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:39.153 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode4
00:29:39.153 Controller IO queue size 128, less than required.
00:29:39.153 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:39.153 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode5
00:29:39.153 Controller IO queue size 128, less than required.
00:29:39.153 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:39.153 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode9
00:29:39.153 Controller IO queue size 128, less than required.
00:29:39.153 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:39.153 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode8
00:29:39.153 Controller IO queue size 128, less than required.
00:29:39.153 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:39.153 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:29:39.153 Controller IO queue size 128, less than required.
00:29:39.153 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
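The queue-size warnings above mean the perf tool asked for a deeper queue than the 128 entries each controller advertises, so the overflow waits inside the NVMe driver. A minimal sketch of a re-run that respects the advertised depth; -q (queue depth), -o (I/O size), -w (workload), -t (time) and -r (transport ID) are standard spdk_nvme_perf options, but the values and the cnode3 target below are illustrative assumptions, not parameters taken from this run:

# hypothetical re-invocation: cap queue depth at the controller's advertised
# IO queue size (128) and use smaller 4 KiB writes
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
  -q 128 -o 4096 -w write -t 10 \
  -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode3'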
00:29:39.153 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:29:39.153 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:29:39.153 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:29:39.153 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:29:39.153 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:29:39.153 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:29:39.153 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:29:39.153 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:29:39.153 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:29:39.153 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:39.153 Initialization complete. Launching workers.
00:29:39.153 ========================================================
00:29:39.153                                                                                      Latency(us)
00:29:39.153 Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:29:39.154 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3)  NSID 1 from core 0:    1324.49      56.91   96691.44   55801.11 1236451.35
00:29:39.154 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6)  NSID 1 from core 0:    1338.91      57.53   95881.78   12689.97 1234170.81
00:29:39.154 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7)  NSID 1 from core 0:    1348.40      57.94   95479.44    3264.96 1225514.60
00:29:39.154 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2)  NSID 1 from core 0:    1318.05      56.63   97961.73   56186.98 1313457.48
00:29:39.154 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:    1313.64      56.45   98568.09   54516.71 1347189.86
00:29:39.154 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4)  NSID 1 from core 0:    1319.24      56.69   98396.02   59238.92 1336988.39
00:29:39.154 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5)  NSID 1 from core 0:    1321.44      56.78   98511.61   16720.35 1358465.05
00:29:39.154 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9)  NSID 1 from core 0:    1371.29      58.92   95215.76     119.39 1248299.39
00:29:39.154 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8)  NSID 1 from core 0:    1349.08      57.97   97079.44     189.10 1322493.15
00:29:39.154 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1)  NSID 1 from core 0:    1293.80      55.59  101500.65   60011.90 1474218.50
00:29:39.154 ========================================================
00:29:39.154 Total                                                                          :   13298.33     571.41   97503.36     119.39 1474218.50
00:29:39.154
00:29:39.154 [2024-11-06 15:34:06.555375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:29:39.154 [2024-11-06 15:34:06.555410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:29:39.154 [2024-11-06 15:34:06.557560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:29:39.154 [2024-11-06 15:34:06.557582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:29:39.154 [2024-11-06 15:34:06.559515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:29:39.154 [2024-11-06 15:34:06.559536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:29:39.154 [2024-11-06 15:34:06.561328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:29:39.154 [2024-11-06 15:34:06.561348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:29:39.154 [2024-11-06 15:34:06.563312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:29:39.154 [2024-11-06 15:34:06.563336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:29:39.154 [2024-11-06 15:34:06.565310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:29:39.154 [2024-11-06 15:34:06.565334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:29:39.154 [2024-11-06 15:34:06.567207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:29:39.154 [2024-11-06 15:34:06.567232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:29:39.154 [2024-11-06 15:34:06.569178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:29:39.154 [2024-11-06 15:34:06.569206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:29:39.154 [2024-11-06 15:34:06.571002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:29:39.154 [2024-11-06 15:34:06.571026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:29:39.154 [2024-11-06 15:34:06.604757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:29:39.154 [2024-11-06 15:34:06.604780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
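A quick consistency check on the summary table above: MiB/s should equal IOPS × I/O size / 2^20. Solving any row for the I/O size gives roughly 45,056 bytes (44 KiB) per write; the run's actual -o value is not visible in this log, so that size is an inference from the table rather than a recorded parameter. For example, recomputing the Total row:

# verify Total MiB/s from its IOPS and the inferred 45,056-byte I/O size
awk 'BEGIN { printf "%.2f MiB/s\n", 13298.33 * 45056 / 1048576 }'   # prints 571.41, matching the table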
00:29:39.154 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:41.690 15:34:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:29:42.262 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3202577
00:29:42.262 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:29:42.262 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3202577
00:29:42.262 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:29:42.262 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:29:42.262 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:29:42.262 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:29:42.262 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 3202577
00:29:42.262 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:29:42.262 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:29:42.262 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:29:42.262 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:29:42.262 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:29:42.262 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:29:42.262 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:29:42.262 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:42.262 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:29:42.262 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:42.262 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:29:42.262 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:29:42.262 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:29:42.262 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:29:42.262 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:42.262 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:29:42.262 rmmod nvme_rdma
00:29:42.524 rmmod nvme_fabrics
00:29:42.524 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:42.524 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:29:42.524 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:29:42.524 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3202164 ']'
00:29:42.524 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3202164
00:29:42.524 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3202164 ']'
00:29:42.524 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3202164
00:29:42.524 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3202164) - No such process
00:29:42.524 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3202164 is not found'
00:29:42.524 Process with pid 3202164 is not found
00:29:42.524 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:42.524 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:29:42.524
00:29:42.524 real 0m12.519s
00:29:42.524 user 0m46.872s
00:29:42.524 sys 0m1.659s
00:29:42.524 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:42.524 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:42.524 ************************************
00:29:42.524 END TEST nvmf_shutdown_tc4
00:29:42.524 ************************************
00:29:42.524 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:29:42.524
00:29:42.524 real 0m53.041s
00:29:42.524 user 2m56.548s
00:29:42.524 sys 0m12.765s
00:29:42.524 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:42.524 15:34:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:29:42.524 ************************************
00:29:42.524 END TEST nvmf_shutdown
00:29:42.524 ************************************
00:29:42.524 15:34:10 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma
00:29:42.524 15:34:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:29:42.524 15:34:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:29:42.524 15:34:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:29:42.524 ************************************
00:29:42.524 START TEST nvmf_nsid
00:29:42.524 ************************************
00:29:42.525 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma
00:29:42.785 * Looking for test storage...
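The NOT wait 3202577 trace above is autotest's negative assertion: the step passes only because waiting on the already-killed perf process fails (es=1). A minimal bash sketch of the same pattern; the helper below paraphrases what the trace shows rather than quoting the SPDK source verbatim:

# sketch: succeed if and only if the wrapped command fails
NOT() {
  if "$@"; then
    return 1   # unexpected success
  fi
  return 0     # expected failure, assertion holds
}

NOT wait 3202577   # the pid is gone, so wait fails and NOT returns 0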
00:29:42.785 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:42.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.785 --rc genhtml_branch_coverage=1 00:29:42.785 --rc genhtml_function_coverage=1 00:29:42.785 --rc genhtml_legend=1 00:29:42.785 --rc geninfo_all_blocks=1 00:29:42.785 --rc geninfo_unexecuted_blocks=1 00:29:42.785 00:29:42.785 ' 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:42.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.785 --rc genhtml_branch_coverage=1 00:29:42.785 --rc genhtml_function_coverage=1 00:29:42.785 --rc genhtml_legend=1 00:29:42.785 --rc geninfo_all_blocks=1 00:29:42.785 --rc geninfo_unexecuted_blocks=1 00:29:42.785 00:29:42.785 ' 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:42.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.785 --rc genhtml_branch_coverage=1 00:29:42.785 --rc genhtml_function_coverage=1 00:29:42.785 --rc genhtml_legend=1 00:29:42.785 --rc geninfo_all_blocks=1 00:29:42.785 --rc geninfo_unexecuted_blocks=1 00:29:42.785 00:29:42.785 ' 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:42.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.785 --rc genhtml_branch_coverage=1 00:29:42.785 --rc genhtml_function_coverage=1 00:29:42.785 --rc genhtml_legend=1 00:29:42.785 --rc geninfo_all_blocks=1 00:29:42.785 --rc geninfo_unexecuted_blocks=1 00:29:42.785 00:29:42.785 ' 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:42.785 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:42.786 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:42.786 15:34:10 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:49.358 15:34:16 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:29:49.358 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:29:49.358 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 
00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:29:49.358 Found net devices under 0000:18:00.0: mlx_0_0 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:29:49.358 Found net devices under 0000:18:00.1: mlx_0_1 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@448 -- # rdma_device_init 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # uname 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:29:49.358 15:34:16 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@67 -- # modprobe ib_core 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:29:49.358 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:29:49.359 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:49.359 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:29:49.359 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:49.359 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:49.359 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:49.359 15:34:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 
00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:29:49.619 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:49.619 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:29:49.619 altname enp24s0f0np0 00:29:49.619 altname ens785f0np0 00:29:49.619 inet 192.168.100.8/24 scope global mlx_0_0 00:29:49.619 valid_lft forever preferred_lft forever 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:29:49.619 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:49.619 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:29:49.619 altname enp24s0f1np1 00:29:49.619 altname ens785f1np1 00:29:49.619 inet 192.168.100.9/24 scope global mlx_0_1 00:29:49.619 valid_lft forever preferred_lft forever 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:49.619 
15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:29:49.619 192.168.100.9' 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:29:49.619 192.168.100.9' 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # head -n 1 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:29:49.619 192.168.100.9' 00:29:49.619 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # tail -n +2 00:29:49.620 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # head -n 1 00:29:49.620 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:49.620 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:29:49.620 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:49.620 15:34:17 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:29:49.620 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:29:49.620 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:29:49.620 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:29:49.620 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:49.620 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:49.620 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:49.620 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3206758 00:29:49.620 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:29:49.620 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3206758 00:29:49.620 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 3206758 ']' 00:29:49.620 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:49.620 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:49.620 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:49.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:49.620 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:49.620 15:34:17 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:49.882 [2024-11-06 15:34:17.273521] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:29:49.882 [2024-11-06 15:34:17.273633] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:49.882 [2024-11-06 15:34:17.423284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.164 [2024-11-06 15:34:17.534119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:50.164 [2024-11-06 15:34:17.534173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:50.164 [2024-11-06 15:34:17.534187] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:50.164 [2024-11-06 15:34:17.534201] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:50.164 [2024-11-06 15:34:17.534212] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:50.164 [2024-11-06 15:34:17.535516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.490 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:50.490 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:29:50.490 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:50.490 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:50.490 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:50.748 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3206802 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=192.168.100.8 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=192.168.100.8 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=40e428a5-83af-4779-ab4d-8b2ac804850d 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=9ac0c8d1-0cf0-425c-bf65-13cea9459f98 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=d37f271a-6003-4069-b9c4-2c8292740cc0 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.749 15:34:18 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:50.749 null0 00:29:50.749 null1 00:29:50.749 null2 00:29:50.749 [2024-11-06 15:34:18.201368] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029a40/0x7fc30b6a6940) succeed. 00:29:50.749 [2024-11-06 15:34:18.208171] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:29:50.749 [2024-11-06 15:34:18.208281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3206802 ] 00:29:50.749 [2024-11-06 15:34:18.210809] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029bc0/0x7fc30b662940) succeed. 00:29:50.749 [2024-11-06 15:34:18.299680] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3206802 /var/tmp/tgt2.sock 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 3206802 ']' 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:29:50.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:50.749 15:34:18 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:50.749 [2024-11-06 15:34:18.355812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.008 [2024-11-06 15:34:18.468912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.945 15:34:19 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:51.945 15:34:19 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:29:51.945 15:34:19 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:29:52.205 [2024-11-06 15:34:19.605371] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029740/0x7fb235f48940) succeed. 00:29:52.205 [2024-11-06 15:34:19.616901] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000298c0/0x7fb23511a940) succeed. 
00:29:52.205 [2024-11-06 15:34:19.696998] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:29:52.205 nvme0n1 nvme0n2 00:29:52.205 nvme1n1 00:29:52.205 15:34:19 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:29:52.205 15:34:19 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:29:52.205 15:34:19 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t rdma -a 192.168.100.8 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 40e428a5-83af-4779-ab4d-8b2ac804850d 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=40e428a583af4779ab4d8b2ac804850d 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 40E428A583AF4779AB4D8B2AC804850D 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 40E428A583AF4779AB4D8B2AC804850D == \4\0\E\4\2\8\A\5\8\3\A\F\4\7\7\9\A\B\4\D\8\B\2\A\C\8\0\4\8\5\0\D ]] 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:29:57.532 15:34:24 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 9ac0c8d1-0cf0-425c-bf65-13cea9459f98 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:57.532 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=9ac0c8d10cf0425cbf6513cea9459f98 00:29:57.533 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 9AC0C8D10CF0425CBF6513CEA9459F98 00:29:57.533 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 9AC0C8D10CF0425CBF6513CEA9459F98 == \9\A\C\0\C\8\D\1\0\C\F\0\4\2\5\C\B\F\6\5\1\3\C\E\A\9\4\5\9\F\9\8 ]] 00:29:57.533 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:29:57.533 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:29:57.533 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:29:57.533 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:29:57.533 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:29:57.533 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:29:57.533 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:29:57.533 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid d37f271a-6003-4069-b9c4-2c8292740cc0 00:29:57.533 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:57.533 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:29:57.533 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:29:57.533 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:29:57.533 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:57.533 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d37f271a60034069b9c42c8292740cc0 00:29:57.533 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D37F271A60034069B9C42C8292740CC0 00:29:57.533 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ D37F271A60034069B9C42C8292740CC0 == 
\D\3\7\F\2\7\1\A\6\0\0\3\4\0\6\9\B\9\C\4\2\C\8\2\9\2\7\4\0\C\C\0 ]] 00:29:57.533 15:34:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:30:01.727 15:34:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:30:01.727 15:34:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:30:01.727 15:34:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3206802 00:30:01.727 15:34:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 3206802 ']' 00:30:01.727 15:34:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 3206802 00:30:01.727 15:34:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:30:01.727 15:34:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:01.727 15:34:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3206802 00:30:01.727 15:34:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:01.727 15:34:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:01.727 15:34:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3206802' 00:30:01.727 killing process with pid 3206802 00:30:01.727 15:34:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 3206802 00:30:01.727 15:34:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 3206802 00:30:04.262 15:34:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:30:04.262 15:34:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:04.262 15:34:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:30:04.262 15:34:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:30:04.262 15:34:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:30:04.262 15:34:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:30:04.262 15:34:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:04.262 15:34:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:30:04.262 rmmod nvme_rdma 00:30:04.262 rmmod nvme_fabrics 00:30:04.262 15:34:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:04.262 15:34:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:30:04.262 15:34:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:30:04.262 15:34:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3206758 ']' 00:30:04.262 15:34:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3206758 00:30:04.262 15:34:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 3206758 ']' 00:30:04.262 15:34:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 3206758 00:30:04.262 15:34:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:30:04.262 15:34:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:04.262 15:34:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3206758 00:30:04.262 15:34:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:04.262 15:34:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:04.262 15:34:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3206758' 00:30:04.262 killing process with pid 3206758 00:30:04.262 15:34:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 3206758 00:30:04.262 15:34:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 3206758 00:30:05.200 15:34:32 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:05.200 15:34:32 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:30:05.200 00:30:05.200 real 0m22.704s 00:30:05.200 user 0m30.503s 00:30:05.200 sys 0m6.869s 00:30:05.200 15:34:32 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:05.200 15:34:32 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:05.200 ************************************ 00:30:05.200 END TEST nvmf_nsid 00:30:05.200 ************************************ 00:30:05.200 15:34:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:05.200 00:30:05.200 real 17m10.866s 00:30:05.200 user 51m29.577s 00:30:05.200 sys 3m27.837s 00:30:05.200 15:34:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:05.200 15:34:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:05.200 ************************************ 00:30:05.200 END TEST nvmf_target_extra 00:30:05.200 ************************************ 00:30:05.459 15:34:32 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:30:05.459 15:34:32 nvmf_rdma -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:05.459 15:34:32 nvmf_rdma -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:05.459 15:34:32 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:30:05.459 ************************************ 00:30:05.459 START TEST nvmf_host 00:30:05.459 ************************************ 00:30:05.459 15:34:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:30:05.459 * Looking for test storage... 
00:30:05.459 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:05.459 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:05.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.718 --rc genhtml_branch_coverage=1 00:30:05.718 --rc genhtml_function_coverage=1 00:30:05.718 --rc genhtml_legend=1 00:30:05.718 --rc geninfo_all_blocks=1 00:30:05.718 --rc geninfo_unexecuted_blocks=1 00:30:05.718 00:30:05.718 ' 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 
00:30:05.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.718 --rc genhtml_branch_coverage=1 00:30:05.718 --rc genhtml_function_coverage=1 00:30:05.718 --rc genhtml_legend=1 00:30:05.718 --rc geninfo_all_blocks=1 00:30:05.718 --rc geninfo_unexecuted_blocks=1 00:30:05.718 00:30:05.718 ' 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:05.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.718 --rc genhtml_branch_coverage=1 00:30:05.718 --rc genhtml_function_coverage=1 00:30:05.718 --rc genhtml_legend=1 00:30:05.718 --rc geninfo_all_blocks=1 00:30:05.718 --rc geninfo_unexecuted_blocks=1 00:30:05.718 00:30:05.718 ' 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:05.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.718 --rc genhtml_branch_coverage=1 00:30:05.718 --rc genhtml_function_coverage=1 00:30:05.718 --rc genhtml_legend=1 00:30:05.718 --rc geninfo_all_blocks=1 00:30:05.718 --rc geninfo_unexecuted_blocks=1 00:30:05.718 00:30:05.718 ' 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:05.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.718 ************************************ 00:30:05.718 START TEST nvmf_multicontroller 00:30:05.718 ************************************ 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:30:05.718 * Looking for test storage... 00:30:05.718 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:05.718 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:05.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.977 --rc genhtml_branch_coverage=1 00:30:05.977 --rc genhtml_function_coverage=1 00:30:05.977 --rc genhtml_legend=1 00:30:05.977 --rc geninfo_all_blocks=1 00:30:05.977 --rc geninfo_unexecuted_blocks=1 00:30:05.977 00:30:05.977 ' 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:05.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.977 --rc genhtml_branch_coverage=1 00:30:05.977 --rc genhtml_function_coverage=1 00:30:05.977 --rc genhtml_legend=1 00:30:05.977 --rc geninfo_all_blocks=1 00:30:05.977 --rc geninfo_unexecuted_blocks=1 00:30:05.977 00:30:05.977 ' 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:05.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.977 --rc genhtml_branch_coverage=1 00:30:05.977 --rc genhtml_function_coverage=1 00:30:05.977 --rc genhtml_legend=1 00:30:05.977 --rc geninfo_all_blocks=1 00:30:05.977 --rc geninfo_unexecuted_blocks=1 00:30:05.977 00:30:05.977 ' 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:05.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.977 --rc genhtml_branch_coverage=1 00:30:05.977 --rc genhtml_function_coverage=1 00:30:05.977 --rc genhtml_legend=1 00:30:05.977 --rc geninfo_all_blocks=1 00:30:05.977 --rc geninfo_unexecuted_blocks=1 00:30:05.977 00:30:05.977 ' 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 
00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.977 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:05.978 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:05.978 15:34:33 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:30:05.978 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:30:05.978 00:30:05.978 real 0m0.234s 00:30:05.978 user 0m0.127s 00:30:05.978 sys 0m0.125s 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:05.978 ************************************ 00:30:05.978 END TEST nvmf_multicontroller 00:30:05.978 ************************************ 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.978 ************************************ 00:30:05.978 START TEST nvmf_aer 00:30:05.978 ************************************ 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:30:05.978 * Looking for test storage... 
00:30:05.978 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:30:05.978 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:06.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.237 --rc genhtml_branch_coverage=1 00:30:06.237 --rc genhtml_function_coverage=1 00:30:06.237 --rc genhtml_legend=1 00:30:06.237 --rc geninfo_all_blocks=1 00:30:06.237 --rc geninfo_unexecuted_blocks=1 00:30:06.237 00:30:06.237 ' 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:06.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.237 --rc genhtml_branch_coverage=1 00:30:06.237 --rc genhtml_function_coverage=1 00:30:06.237 --rc genhtml_legend=1 00:30:06.237 --rc geninfo_all_blocks=1 00:30:06.237 --rc geninfo_unexecuted_blocks=1 00:30:06.237 00:30:06.237 ' 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:06.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.237 --rc genhtml_branch_coverage=1 00:30:06.237 --rc genhtml_function_coverage=1 00:30:06.237 --rc genhtml_legend=1 00:30:06.237 --rc geninfo_all_blocks=1 00:30:06.237 --rc geninfo_unexecuted_blocks=1 00:30:06.237 00:30:06.237 ' 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:06.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.237 --rc genhtml_branch_coverage=1 00:30:06.237 --rc genhtml_function_coverage=1 00:30:06.237 --rc genhtml_legend=1 00:30:06.237 --rc geninfo_all_blocks=1 00:30:06.237 --rc geninfo_unexecuted_blocks=1 00:30:06.237 00:30:06.237 ' 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.237 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:06.238 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:30:06.238 15:34:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:30:12.809 15:34:40 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:30:12.809 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:30:12.809 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:30:12.809 Found net devices under 0000:18:00.0: mlx_0_0 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:12.809 
15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:30:12.809 Found net devices under 0000:18:00.1: mlx_0_1 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # rdma_device_init 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@530 -- # allocate_nic_ips 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:12.809 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:13.069 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:13.069 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:13.069 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:13.069 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:13.069 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:13.069 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:30:13.069 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:13.069 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:13.069 15:34:40 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:13.069 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:13.069 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:13.069 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:13.069 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:30:13.069 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:13.069 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:30:13.069 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:13.069 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:13.069 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:13.069 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:13.069 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:30:13.070 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:13.070 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:30:13.070 altname enp24s0f0np0 00:30:13.070 altname ens785f0np0 00:30:13.070 inet 192.168.100.8/24 scope global mlx_0_0 00:30:13.070 valid_lft forever preferred_lft forever 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:30:13.070 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:13.070 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:30:13.070 altname enp24s0f1np1 00:30:13.070 altname ens785f1np1 00:30:13.070 inet 192.168.100.9/24 scope global mlx_0_1 00:30:13.070 valid_lft forever preferred_lft forever 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:30:13.070 192.168.100.9' 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:30:13.070 192.168.100.9' 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # head -n 1 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:30:13.070 192.168.100.9' 
00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # tail -n +2 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # head -n 1 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3211863 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3211863 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 3211863 ']' 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:13.070 15:34:40 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:13.329 [2024-11-06 15:34:40.725294] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:30:13.329 [2024-11-06 15:34:40.725408] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:13.329 [2024-11-06 15:34:40.878879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:13.589 [2024-11-06 15:34:40.996216] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:13.589 [2024-11-06 15:34:40.996268] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:13.589 [2024-11-06 15:34:40.996282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:13.589 [2024-11-06 15:34:40.996295] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:30:13.589 [2024-11-06 15:34:40.996306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:13.589 [2024-11-06 15:34:40.998638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.589 [2024-11-06 15:34:40.998724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:13.589 [2024-11-06 15:34:40.998744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.589 [2024-11-06 15:34:40.998773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:14.157 15:34:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:14.157 15:34:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:30:14.157 15:34:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:14.157 15:34:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:14.157 15:34:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:14.157 15:34:41 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:14.157 15:34:41 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:30:14.157 15:34:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.157 15:34:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:14.157 [2024-11-06 15:34:41.626164] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f5100576940) succeed. 00:30:14.157 [2024-11-06 15:34:41.635726] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f5100532940) succeed. 
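The nvmf_aer target bring-up traced above reduces to a short sequence of SPDK RPCs. As a minimal sketch only (assuming a running nvmf_tgt on the default /var/tmp/spdk.sock and using scripts/rpc.py directly instead of the test helper rpc_cmd, which wraps it), the same configuration the following trace performs would look roughly like:

$ scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192    # RDMA transport; this is what produces the create_ib_device notices above
$ scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0                           # 64 MB malloc bdev, 512-byte blocks (values taken from the trace)
$ scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$ scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$ scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$ scripts/rpc.py nvmf_get_subsystems                                                 # should report the subsystem JSON shown further down in the log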
00:30:14.417 15:34:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.417 15:34:41 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:14.417 15:34:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.417 15:34:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:14.417 Malloc0 00:30:14.417 15:34:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.417 15:34:41 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:14.417 15:34:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.417 15:34:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:14.417 15:34:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.417 15:34:41 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:14.417 15:34:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.417 15:34:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:14.417 15:34:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.417 15:34:41 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:14.417 15:34:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.417 15:34:41 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:14.417 [2024-11-06 15:34:42.004080] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:14.417 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.417 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:14.417 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.417 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:14.417 [ 00:30:14.417 { 00:30:14.417 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:14.417 "subtype": "Discovery", 00:30:14.417 "listen_addresses": [], 00:30:14.417 "allow_any_host": true, 00:30:14.417 "hosts": [] 00:30:14.417 }, 00:30:14.417 { 00:30:14.417 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:14.417 "subtype": "NVMe", 00:30:14.417 "listen_addresses": [ 00:30:14.417 { 00:30:14.417 "trtype": "RDMA", 00:30:14.417 "adrfam": "IPv4", 00:30:14.417 "traddr": "192.168.100.8", 00:30:14.417 "trsvcid": "4420" 00:30:14.417 } 00:30:14.417 ], 00:30:14.417 "allow_any_host": true, 00:30:14.417 "hosts": [], 00:30:14.417 "serial_number": "SPDK00000000000001", 00:30:14.417 "model_number": "SPDK bdev Controller", 00:30:14.417 "max_namespaces": 2, 00:30:14.417 "min_cntlid": 1, 00:30:14.417 "max_cntlid": 65519, 00:30:14.417 "namespaces": [ 00:30:14.417 { 00:30:14.417 "nsid": 1, 00:30:14.417 "bdev_name": "Malloc0", 00:30:14.417 "name": "Malloc0", 00:30:14.417 "nguid": "3C1A07936F5F490C8A21184CD70A38DF", 00:30:14.417 "uuid": "3c1a0793-6f5f-490c-8a21-184cd70a38df" 00:30:14.417 } 00:30:14.417 ] 00:30:14.417 } 00:30:14.417 ] 00:30:14.417 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.417 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:14.417 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:14.417 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3212078 00:30:14.417 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:14.417 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:14.417 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:30:14.417 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:14.417 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:30:14.417 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:30:14.417 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:30:14.677 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:14.677 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:30:14.677 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:30:14.677 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:30:14.677 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:14.677 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 2 -lt 200 ']' 00:30:14.677 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=3 00:30:14.677 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:30:14.936 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:14.936 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:14.936 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:30:14.936 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:14.936 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.936 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:14.936 Malloc1 00:30:14.936 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.936 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:14.936 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.936 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:14.936 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.936 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:14.936 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.936 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:14.936 [ 00:30:14.936 { 00:30:14.936 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:14.936 "subtype": "Discovery", 00:30:14.936 "listen_addresses": [], 00:30:14.936 "allow_any_host": true, 00:30:14.936 "hosts": [] 00:30:14.936 }, 00:30:14.936 { 00:30:14.936 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:14.936 "subtype": "NVMe", 00:30:14.936 "listen_addresses": [ 00:30:14.936 { 00:30:14.936 "trtype": "RDMA", 00:30:14.936 "adrfam": "IPv4", 00:30:14.936 "traddr": "192.168.100.8", 00:30:14.936 "trsvcid": "4420" 00:30:14.936 } 00:30:14.936 ], 00:30:14.936 "allow_any_host": true, 00:30:14.936 "hosts": [], 00:30:14.936 "serial_number": "SPDK00000000000001", 00:30:14.936 "model_number": "SPDK bdev Controller", 00:30:14.936 "max_namespaces": 2, 00:30:14.936 "min_cntlid": 1, 00:30:14.936 "max_cntlid": 65519, 00:30:14.936 "namespaces": [ 00:30:14.936 { 00:30:14.936 "nsid": 1, 00:30:14.936 "bdev_name": "Malloc0", 00:30:14.936 "name": "Malloc0", 00:30:14.936 "nguid": "3C1A07936F5F490C8A21184CD70A38DF", 00:30:14.936 "uuid": "3c1a0793-6f5f-490c-8a21-184cd70a38df" 00:30:14.936 }, 00:30:14.936 { 00:30:14.936 "nsid": 2, 00:30:14.936 "bdev_name": "Malloc1", 00:30:14.936 "name": "Malloc1", 00:30:14.936 "nguid": "66F7004EC40F47A08AC120914A8CFFD6", 00:30:14.936 "uuid": "66f7004e-c40f-47a0-8ac1-20914a8cffd6" 00:30:14.936 } 00:30:14.936 ] 00:30:14.936 } 00:30:14.936 ] 00:30:14.936 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.936 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3212078 00:30:15.196 Asynchronous Event Request test 00:30:15.196 Attaching to 192.168.100.8 00:30:15.196 Attached to 192.168.100.8 00:30:15.196 Registering asynchronous event callbacks... 00:30:15.196 Starting namespace attribute notice tests for all controllers... 00:30:15.196 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:15.196 aer_cb - Changed Namespace 00:30:15.196 Cleaning up... 
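The Asynchronous Event Request exercise above is driven by the second namespace: test/nvme/aer/aer connects to nqn.2016-06.io.spdk:cnode1 over RDMA and waits, and adding Malloc1 as nsid 2 is what produces the "aer_cb for log page 4" (Changed Namespace List) callback. A hedged sketch of that trigger step, reusing the values the trace shows and again assuming scripts/rpc.py against the default RPC socket:

$ scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1                          # 64 MB bdev with 4096-byte blocks, as in the trace
$ scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2      # fires the namespace-attribute-change AEN on the connected host
$ scripts/rpc.py nvmf_get_subsystems                                                # now lists nsid 1 (Malloc0) and nsid 2 (Malloc1)

Completion is signalled by the aer tool touching /tmp/aer_touch_file, which the waitforfile polling loop in autotest_common.sh (the repeated "sleep 0.1" checks above) is watching for.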
00:30:15.196 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:15.196 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.196 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:15.456 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.456 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:15.456 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.456 15:34:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:15.456 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.456 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:15.456 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.456 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:15.456 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.456 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:15.456 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:15.456 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:15.456 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:30:15.456 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:30:15.456 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:30:15.456 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:30:15.456 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:15.456 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:30:15.456 rmmod nvme_rdma 00:30:15.456 rmmod nvme_fabrics 00:30:15.716 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:15.716 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:30:15.716 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:30:15.716 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3211863 ']' 00:30:15.716 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3211863 00:30:15.716 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 3211863 ']' 00:30:15.716 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 3211863 00:30:15.716 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:30:15.716 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:15.716 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3211863 00:30:15.716 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:15.716 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:15.716 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3211863' 00:30:15.716 killing process 
with pid 3211863 00:30:15.716 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 3211863 00:30:15.716 15:34:43 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 3211863 00:30:17.623 15:34:44 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:17.623 15:34:44 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:30:17.623 00:30:17.623 real 0m11.382s 00:30:17.623 user 0m15.540s 00:30:17.623 sys 0m6.248s 00:30:17.623 15:34:44 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:17.623 15:34:44 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:17.623 ************************************ 00:30:17.623 END TEST nvmf_aer 00:30:17.623 ************************************ 00:30:17.623 15:34:44 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:30:17.623 15:34:44 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:17.623 15:34:44 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:17.623 15:34:44 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.623 ************************************ 00:30:17.623 START TEST nvmf_async_init 00:30:17.623 ************************************ 00:30:17.623 15:34:44 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:30:17.623 * Looking for test storage... 00:30:17.623 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 
00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:17.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.623 --rc genhtml_branch_coverage=1 00:30:17.623 --rc genhtml_function_coverage=1 00:30:17.623 --rc genhtml_legend=1 00:30:17.623 --rc geninfo_all_blocks=1 00:30:17.623 --rc geninfo_unexecuted_blocks=1 00:30:17.623 00:30:17.623 ' 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:17.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.623 --rc genhtml_branch_coverage=1 00:30:17.623 --rc genhtml_function_coverage=1 00:30:17.623 --rc genhtml_legend=1 00:30:17.623 --rc geninfo_all_blocks=1 00:30:17.623 --rc geninfo_unexecuted_blocks=1 00:30:17.623 00:30:17.623 ' 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:17.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.623 --rc genhtml_branch_coverage=1 00:30:17.623 --rc genhtml_function_coverage=1 00:30:17.623 --rc genhtml_legend=1 00:30:17.623 --rc geninfo_all_blocks=1 00:30:17.623 --rc geninfo_unexecuted_blocks=1 00:30:17.623 00:30:17.623 ' 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:17.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.623 --rc genhtml_branch_coverage=1 00:30:17.623 --rc genhtml_function_coverage=1 00:30:17.623 --rc genhtml_legend=1 00:30:17.623 --rc geninfo_all_blocks=1 00:30:17.623 --rc geninfo_unexecuted_blocks=1 00:30:17.623 00:30:17.623 ' 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.623 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:17.624 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 
00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=a10829ba1e3841a786eebe0daf8eaccb 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:30:17.624 15:34:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:30:24.196 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:30:24.197 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:30:24.197 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ mlx5_core == unbound ]] 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:30:24.197 Found net devices under 0000:18:00.0: mlx_0_0 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:30:24.197 Found net devices under 0000:18:00.1: mlx_0_1 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # rdma_device_init 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:30:24.197 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:30:24.457 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # 
modprobe ib_core 00:30:24.457 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:30:24.457 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:30:24.457 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:30:24.457 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:30:24.457 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:30:24.457 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@530 -- # allocate_nic_ips 00:30:24.457 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:24.457 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:30:24.457 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:24.457 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:24.457 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:24.457 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:24.457 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:24.457 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:24.457 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:24.457 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:24.457 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:24.457 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:30:24.457 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:24.457 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:24.457 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:24.457 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:30:24.458 15:34:51 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:30:24.458 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:24.458 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:30:24.458 altname enp24s0f0np0 00:30:24.458 altname ens785f0np0 00:30:24.458 inet 192.168.100.8/24 scope global mlx_0_0 00:30:24.458 valid_lft forever preferred_lft forever 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:30:24.458 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:24.458 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:30:24.458 altname enp24s0f1np1 00:30:24.458 altname ens785f1np1 00:30:24.458 inet 192.168.100.9/24 scope global mlx_0_1 00:30:24.458 valid_lft forever preferred_lft forever 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 
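allocate_nic_ips walks the RDMA-capable interfaces returned by get_rdma_if_list and reads back each one's IPv4 address; in this run the two mlx5 ports resolve to 192.168.100.8 and 192.168.100.9. A condensed sketch of that per-interface lookup, using the same ip/awk/cut pipeline that get_ip_address in nvmf/common.sh traces above (interface names taken from the log):

  for ifc in mlx_0_0 mlx_0_1; do
      # fourth field of 'ip -o -4 addr show' is the CIDR address; strip the prefix length
      ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
  done
  # prints 192.168.100.8 and 192.168.100.9, which the harness goes on to record as
  # NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP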
00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:24.458 15:34:51 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:30:24.458 192.168.100.9' 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:30:24.458 192.168.100.9' 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # head -n 1 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:30:24.458 192.168.100.9' 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # tail -n +2 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # head -n 1 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 
-- # modprobe nvme-rdma 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3215347 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3215347 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 3215347 ']' 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:24.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:24.458 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:24.718 [2024-11-06 15:34:52.175180] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:30:24.718 [2024-11-06 15:34:52.175303] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:24.718 [2024-11-06 15:34:52.322498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.977 [2024-11-06 15:34:52.429577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:24.977 [2024-11-06 15:34:52.429639] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:24.977 [2024-11-06 15:34:52.429652] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:24.977 [2024-11-06 15:34:52.429666] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:24.977 [2024-11-06 15:34:52.429675] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
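nvmfappstart launches the target with a single-core mask and then blocks in waitforlisten until the RPC socket answers; the EAL parameter dump above and the reactor start-up notice that follows confirm it came up as pid 3215347. A simplified sketch of that start-and-wait step (binary path and flags copied from the log; the poll loop is an approximation of waitforlisten, not its exact implementation):

  # start the target exactly as the trace does (-m 0x1: run on core 0 only)
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # poll the default RPC socket until the target responds
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "nvmf_tgt listening, pid $nvmfpid"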
00:30:24.977 [2024-11-06 15:34:52.431020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.546 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:25.546 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:30:25.546 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:25.546 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:25.546 15:34:52 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:25.546 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:25.546 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:30:25.546 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.546 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:25.546 [2024-11-06 15:34:53.062799] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7f58f93bd940) succeed. 00:30:25.546 [2024-11-06 15:34:53.071832] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7f58f9379940) succeed. 00:30:25.546 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.546 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:25.546 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.546 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:25.546 null0 00:30:25.546 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.546 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:25.546 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.546 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:25.546 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.546 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:25.546 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.546 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:25.546 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.546 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a10829ba1e3841a786eebe0daf8eaccb 00:30:25.546 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.546 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:25.805 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.805 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:30:25.805 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.805 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:25.805 [2024-11-06 15:34:53.189363] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:25.805 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.805 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:25.805 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.805 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:25.805 nvme0n1 00:30:25.805 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.805 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:25.805 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.805 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:25.805 [ 00:30:25.805 { 00:30:25.805 "name": "nvme0n1", 00:30:25.805 "aliases": [ 00:30:25.805 "a10829ba-1e38-41a7-86ee-be0daf8eaccb" 00:30:25.805 ], 00:30:25.805 "product_name": "NVMe disk", 00:30:25.805 "block_size": 512, 00:30:25.805 "num_blocks": 2097152, 00:30:25.805 "uuid": "a10829ba-1e38-41a7-86ee-be0daf8eaccb", 00:30:25.805 "numa_id": 0, 00:30:25.805 "assigned_rate_limits": { 00:30:25.805 "rw_ios_per_sec": 0, 00:30:25.805 "rw_mbytes_per_sec": 0, 00:30:25.805 "r_mbytes_per_sec": 0, 00:30:25.805 "w_mbytes_per_sec": 0 00:30:25.805 }, 00:30:25.805 "claimed": false, 00:30:25.805 "zoned": false, 00:30:25.805 "supported_io_types": { 00:30:25.805 "read": true, 00:30:25.805 "write": true, 00:30:25.805 "unmap": false, 00:30:25.805 "flush": true, 00:30:25.805 "reset": true, 00:30:25.805 "nvme_admin": true, 00:30:25.805 "nvme_io": true, 00:30:25.805 "nvme_io_md": false, 00:30:25.805 "write_zeroes": true, 00:30:25.805 "zcopy": false, 00:30:25.805 "get_zone_info": false, 00:30:25.805 "zone_management": false, 00:30:25.805 "zone_append": false, 00:30:25.805 "compare": true, 00:30:25.805 "compare_and_write": true, 00:30:25.805 "abort": true, 00:30:25.805 "seek_hole": false, 00:30:25.805 "seek_data": false, 00:30:25.805 "copy": true, 00:30:25.805 "nvme_iov_md": false 00:30:25.805 }, 00:30:25.805 "memory_domains": [ 00:30:25.805 { 00:30:25.805 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:30:25.805 "dma_device_type": 0 00:30:25.805 } 00:30:25.805 ], 00:30:25.805 "driver_specific": { 00:30:25.805 "nvme": [ 00:30:25.805 { 00:30:25.805 "trid": { 00:30:25.805 "trtype": "RDMA", 00:30:25.805 "adrfam": "IPv4", 00:30:25.805 "traddr": "192.168.100.8", 00:30:25.805 "trsvcid": "4420", 00:30:25.805 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:25.805 }, 00:30:25.805 "ctrlr_data": { 00:30:25.805 "cntlid": 1, 00:30:25.805 "vendor_id": "0x8086", 00:30:25.805 "model_number": "SPDK bdev Controller", 00:30:25.805 "serial_number": "00000000000000000000", 00:30:25.805 "firmware_revision": "25.01", 00:30:25.805 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:25.805 "oacs": { 00:30:25.805 "security": 0, 
00:30:25.805 "format": 0, 00:30:25.805 "firmware": 0, 00:30:25.805 "ns_manage": 0 00:30:25.805 }, 00:30:25.805 "multi_ctrlr": true, 00:30:25.805 "ana_reporting": false 00:30:25.805 }, 00:30:25.805 "vs": { 00:30:25.806 "nvme_version": "1.3" 00:30:25.806 }, 00:30:25.806 "ns_data": { 00:30:25.806 "id": 1, 00:30:25.806 "can_share": true 00:30:25.806 } 00:30:25.806 } 00:30:25.806 ], 00:30:25.806 "mp_policy": "active_passive" 00:30:25.806 } 00:30:25.806 } 00:30:25.806 ] 00:30:25.806 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.806 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:25.806 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.806 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:25.806 [2024-11-06 15:34:53.311618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:25.806 [2024-11-06 15:34:53.347489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:30:25.806 [2024-11-06 15:34:53.381225] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:30:25.806 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.806 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:25.806 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.806 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:25.806 [ 00:30:25.806 { 00:30:25.806 "name": "nvme0n1", 00:30:25.806 "aliases": [ 00:30:25.806 "a10829ba-1e38-41a7-86ee-be0daf8eaccb" 00:30:25.806 ], 00:30:25.806 "product_name": "NVMe disk", 00:30:25.806 "block_size": 512, 00:30:25.806 "num_blocks": 2097152, 00:30:25.806 "uuid": "a10829ba-1e38-41a7-86ee-be0daf8eaccb", 00:30:25.806 "numa_id": 0, 00:30:25.806 "assigned_rate_limits": { 00:30:25.806 "rw_ios_per_sec": 0, 00:30:25.806 "rw_mbytes_per_sec": 0, 00:30:25.806 "r_mbytes_per_sec": 0, 00:30:25.806 "w_mbytes_per_sec": 0 00:30:25.806 }, 00:30:25.806 "claimed": false, 00:30:25.806 "zoned": false, 00:30:25.806 "supported_io_types": { 00:30:25.806 "read": true, 00:30:25.806 "write": true, 00:30:25.806 "unmap": false, 00:30:25.806 "flush": true, 00:30:25.806 "reset": true, 00:30:25.806 "nvme_admin": true, 00:30:25.806 "nvme_io": true, 00:30:25.806 "nvme_io_md": false, 00:30:25.806 "write_zeroes": true, 00:30:25.806 "zcopy": false, 00:30:25.806 "get_zone_info": false, 00:30:25.806 "zone_management": false, 00:30:25.806 "zone_append": false, 00:30:25.806 "compare": true, 00:30:25.806 "compare_and_write": true, 00:30:25.806 "abort": true, 00:30:25.806 "seek_hole": false, 00:30:25.806 "seek_data": false, 00:30:25.806 "copy": true, 00:30:25.806 "nvme_iov_md": false 00:30:25.806 }, 00:30:25.806 "memory_domains": [ 00:30:25.806 { 00:30:25.806 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:30:25.806 "dma_device_type": 0 00:30:25.806 } 00:30:25.806 ], 00:30:25.806 "driver_specific": { 00:30:25.806 "nvme": [ 00:30:25.806 { 00:30:25.806 "trid": { 00:30:25.806 "trtype": "RDMA", 00:30:25.806 "adrfam": "IPv4", 00:30:25.806 "traddr": "192.168.100.8", 
00:30:25.806 "trsvcid": "4420", 00:30:25.806 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:25.806 }, 00:30:25.806 "ctrlr_data": { 00:30:25.806 "cntlid": 2, 00:30:25.806 "vendor_id": "0x8086", 00:30:25.806 "model_number": "SPDK bdev Controller", 00:30:25.806 "serial_number": "00000000000000000000", 00:30:25.806 "firmware_revision": "25.01", 00:30:25.806 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:25.806 "oacs": { 00:30:25.806 "security": 0, 00:30:25.806 "format": 0, 00:30:25.806 "firmware": 0, 00:30:25.806 "ns_manage": 0 00:30:25.806 }, 00:30:25.806 "multi_ctrlr": true, 00:30:25.806 "ana_reporting": false 00:30:25.806 }, 00:30:25.806 "vs": { 00:30:25.806 "nvme_version": "1.3" 00:30:25.806 }, 00:30:25.806 "ns_data": { 00:30:25.806 "id": 1, 00:30:25.806 "can_share": true 00:30:25.806 } 00:30:25.806 } 00:30:25.806 ], 00:30:25.806 "mp_policy": "active_passive" 00:30:25.806 } 00:30:25.806 } 00:30:25.806 ] 00:30:25.806 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.806 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:25.806 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.806 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:26.065 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.065 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:26.065 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.hgZmDCjTZU 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.hgZmDCjTZU 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.hgZmDCjTZU 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:26.066 [2024-11-06 15:34:53.489255] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:26.066 [2024-11-06 15:34:53.509279] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:26.066 nvme0n1 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:26.066 [ 00:30:26.066 { 00:30:26.066 "name": "nvme0n1", 00:30:26.066 "aliases": [ 00:30:26.066 "a10829ba-1e38-41a7-86ee-be0daf8eaccb" 00:30:26.066 ], 00:30:26.066 "product_name": "NVMe disk", 00:30:26.066 "block_size": 512, 00:30:26.066 "num_blocks": 2097152, 00:30:26.066 "uuid": "a10829ba-1e38-41a7-86ee-be0daf8eaccb", 00:30:26.066 "numa_id": 0, 00:30:26.066 "assigned_rate_limits": { 00:30:26.066 "rw_ios_per_sec": 0, 00:30:26.066 "rw_mbytes_per_sec": 0, 00:30:26.066 "r_mbytes_per_sec": 0, 00:30:26.066 "w_mbytes_per_sec": 0 00:30:26.066 }, 00:30:26.066 "claimed": false, 00:30:26.066 "zoned": false, 00:30:26.066 "supported_io_types": { 00:30:26.066 "read": true, 00:30:26.066 "write": true, 00:30:26.066 "unmap": false, 00:30:26.066 "flush": true, 00:30:26.066 "reset": true, 00:30:26.066 "nvme_admin": true, 00:30:26.066 "nvme_io": true, 00:30:26.066 "nvme_io_md": false, 00:30:26.066 "write_zeroes": true, 00:30:26.066 "zcopy": false, 00:30:26.066 "get_zone_info": false, 00:30:26.066 "zone_management": false, 00:30:26.066 "zone_append": false, 00:30:26.066 "compare": true, 00:30:26.066 "compare_and_write": true, 00:30:26.066 "abort": true, 00:30:26.066 "seek_hole": false, 00:30:26.066 "seek_data": false, 00:30:26.066 "copy": true, 00:30:26.066 "nvme_iov_md": false 00:30:26.066 }, 00:30:26.066 "memory_domains": [ 00:30:26.066 { 00:30:26.066 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:30:26.066 "dma_device_type": 0 00:30:26.066 } 00:30:26.066 ], 00:30:26.066 "driver_specific": { 00:30:26.066 "nvme": [ 00:30:26.066 { 00:30:26.066 "trid": { 00:30:26.066 "trtype": "RDMA", 00:30:26.066 "adrfam": "IPv4", 00:30:26.066 "traddr": "192.168.100.8", 00:30:26.066 "trsvcid": "4421", 00:30:26.066 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:26.066 }, 00:30:26.066 "ctrlr_data": { 00:30:26.066 "cntlid": 3, 00:30:26.066 "vendor_id": "0x8086", 00:30:26.066 "model_number": "SPDK bdev Controller", 00:30:26.066 
"serial_number": "00000000000000000000", 00:30:26.066 "firmware_revision": "25.01", 00:30:26.066 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:26.066 "oacs": { 00:30:26.066 "security": 0, 00:30:26.066 "format": 0, 00:30:26.066 "firmware": 0, 00:30:26.066 "ns_manage": 0 00:30:26.066 }, 00:30:26.066 "multi_ctrlr": true, 00:30:26.066 "ana_reporting": false 00:30:26.066 }, 00:30:26.066 "vs": { 00:30:26.066 "nvme_version": "1.3" 00:30:26.066 }, 00:30:26.066 "ns_data": { 00:30:26.066 "id": 1, 00:30:26.066 "can_share": true 00:30:26.066 } 00:30:26.066 } 00:30:26.066 ], 00:30:26.066 "mp_policy": "active_passive" 00:30:26.066 } 00:30:26.066 } 00:30:26.066 ] 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.hgZmDCjTZU 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:30:26.066 rmmod nvme_rdma 00:30:26.066 rmmod nvme_fabrics 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3215347 ']' 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3215347 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 3215347 ']' 00:30:26.066 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 3215347 00:30:26.324 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:30:26.324 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:26.324 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3215347 00:30:26.324 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:26.324 15:34:53 
nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:26.324 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3215347' 00:30:26.324 killing process with pid 3215347 00:30:26.324 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 3215347 00:30:26.324 15:34:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 3215347 00:30:27.261 15:34:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:27.261 15:34:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:30:27.261 00:30:27.261 real 0m9.813s 00:30:27.261 user 0m4.746s 00:30:27.261 sys 0m5.889s 00:30:27.261 15:34:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:27.261 15:34:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:27.261 ************************************ 00:30:27.261 END TEST nvmf_async_init 00:30:27.261 ************************************ 00:30:27.261 15:34:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:30:27.261 15:34:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:27.261 15:34:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:27.261 15:34:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.261 ************************************ 00:30:27.261 START TEST dma 00:30:27.261 ************************************ 00:30:27.261 15:34:54 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:30:27.522 * Looking for test storage... 
00:30:27.522 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:27.522 15:34:54 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:27.522 15:34:54 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:30:27.522 15:34:54 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:27.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.522 --rc genhtml_branch_coverage=1 00:30:27.522 --rc genhtml_function_coverage=1 00:30:27.522 --rc genhtml_legend=1 00:30:27.522 --rc geninfo_all_blocks=1 00:30:27.522 --rc geninfo_unexecuted_blocks=1 00:30:27.522 00:30:27.522 ' 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:27.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.522 --rc genhtml_branch_coverage=1 00:30:27.522 --rc genhtml_function_coverage=1 00:30:27.522 --rc genhtml_legend=1 00:30:27.522 --rc geninfo_all_blocks=1 00:30:27.522 --rc geninfo_unexecuted_blocks=1 00:30:27.522 00:30:27.522 ' 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:27.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.522 --rc genhtml_branch_coverage=1 00:30:27.522 --rc genhtml_function_coverage=1 00:30:27.522 --rc genhtml_legend=1 00:30:27.522 --rc geninfo_all_blocks=1 00:30:27.522 --rc geninfo_unexecuted_blocks=1 00:30:27.522 00:30:27.522 ' 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:27.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.522 --rc genhtml_branch_coverage=1 00:30:27.522 --rc genhtml_function_coverage=1 00:30:27.522 --rc genhtml_legend=1 00:30:27.522 --rc geninfo_all_blocks=1 00:30:27.522 --rc geninfo_unexecuted_blocks=1 00:30:27.522 00:30:27.522 ' 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:27.522 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:27.522 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:27.523 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:27.523 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:27.523 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.523 15:34:55 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
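Everything from this point until the first I/O is the nvmftestinit bring-up: loading the IB/RDMA modules, discovering the two mlx5 ports, and assigning the 192.168.100.x addresses the rest of the suite assumes. A rough manual reproduction of the same steps is sketched below (a sketch only; the script path, module list, and interface names are taken verbatim from this trace and will differ on other hosts):

  # load the IB/RDMA stack the tests expect, as the trace below does via modprobe
  modprobe ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm
  # or simply re-run the whole host-side DMA suite stand-alone, as the harness invokes it
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma
  # verify the addressing the tests rely on (192.168.100.8/9 on the two mlx5 ports)
  ip -o -4 addr show mlx_0_0
  ip -o -4 addr show mlx_0_1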
00:30:27.523 15:34:55 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.523 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:27.523 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:27.523 15:34:55 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:30:27.523 15:34:55 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:30:35.649 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:30:35.649 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:30:35.649 Found net devices under 0000:18:00.0: mlx_0_0 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:30:35.649 Found net devices under 0000:18:00.1: mlx_0_1 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # is_hw=yes 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@448 -- # rdma_device_init 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:30:35.649 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:30:35.650 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:35.650 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:30:35.650 altname enp24s0f0np0 00:30:35.650 altname ens785f0np0 00:30:35.650 inet 192.168.100.8/24 scope global mlx_0_0 00:30:35.650 valid_lft forever preferred_lft forever 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:30:35.650 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:35.650 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:30:35.650 altname enp24s0f1np1 00:30:35.650 altname ens785f1np1 00:30:35.650 inet 192.168.100.9/24 scope global mlx_0_1 00:30:35.650 valid_lft forever preferred_lft forever 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # return 0 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:30:35.650 192.168.100.9' 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:30:35.650 192.168.100.9' 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # head -n 1 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:30:35.650 192.168.100.9' 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # tail -n +2 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # head -n 1 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:30:35.650 15:35:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:30:35.650 15:35:02 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:30:35.650 15:35:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:35.650 15:35:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:35.650 15:35:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:35.650 15:35:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@509 -- # nvmfpid=3218639 00:30:35.650 15:35:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:35.650 15:35:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@510 -- # waitforlisten 3218639 00:30:35.650 15:35:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@833 -- # '[' -z 3218639 ']' 00:30:35.650 15:35:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.650 15:35:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:35.650 15:35:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.650 15:35:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:35.650 15:35:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:35.650 [2024-11-06 15:35:02.112409] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:30:35.650 [2024-11-06 15:35:02.112518] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.650 [2024-11-06 15:35:02.263503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:35.650 [2024-11-06 15:35:02.368585] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:35.650 [2024-11-06 15:35:02.368642] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:35.650 [2024-11-06 15:35:02.368655] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:35.650 [2024-11-06 15:35:02.368668] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:35.650 [2024-11-06 15:35:02.368677] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
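With nvmf_tgt up and listening on /var/tmp/spdk.sock, host/dma.sh configures the target entirely over JSON-RPC; the rpc_cmd calls traced below are effectively scripts/rpc.py invocations against that socket. The same target setup issued by hand would look like this (a sketch; all arguments are copied from this trace and the default RPC socket is assumed):

  # create the RDMA transport, a 256 MB / 512-byte-block malloc bdev, and a subsystem exposing it
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  ./scripts/rpc.py bdev_malloc_create 256 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

test_dma then connects to that listener using the bdev_nvme_attach_controller parameters generated by gen_nvmf_target_json further down in the trace.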
00:30:35.650 [2024-11-06 15:35:02.370637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.650 [2024-11-06 15:35:02.370664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:35.650 15:35:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:35.650 15:35:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@866 -- # return 0 00:30:35.650 15:35:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:35.650 15:35:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:35.650 15:35:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:35.650 15:35:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:35.650 15:35:02 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:30:35.650 15:35:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.650 15:35:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:35.650 [2024-11-06 15:35:02.996858] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028b40/0x7f110697d940) succeed. 00:30:35.650 [2024-11-06 15:35:03.006356] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028cc0/0x7f1106939940) succeed. 00:30:35.650 15:35:03 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.650 15:35:03 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:30:35.650 15:35:03 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.650 15:35:03 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:35.910 Malloc0 00:30:35.910 15:35:03 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.910 15:35:03 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:30:35.910 15:35:03 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.910 15:35:03 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:35.910 15:35:03 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.910 15:35:03 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:30:35.910 15:35:03 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.910 15:35:03 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:35.910 15:35:03 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.910 15:35:03 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:30:35.910 15:35:03 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.910 15:35:03 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:35.910 [2024-11-06 15:35:03.440448] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:35.910 15:35:03 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.910 15:35:03 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:30:35.910 15:35:03 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:30:35.910 15:35:03 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # config=() 00:30:35.910 15:35:03 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # local subsystem config 00:30:35.910 15:35:03 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:35.910 15:35:03 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:35.910 { 00:30:35.910 "params": { 00:30:35.910 "name": "Nvme$subsystem", 00:30:35.910 "trtype": "$TEST_TRANSPORT", 00:30:35.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:35.910 "adrfam": "ipv4", 00:30:35.910 "trsvcid": "$NVMF_PORT", 00:30:35.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:35.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:35.910 "hdgst": ${hdgst:-false}, 00:30:35.910 "ddgst": ${ddgst:-false} 00:30:35.910 }, 00:30:35.910 "method": "bdev_nvme_attach_controller" 00:30:35.910 } 00:30:35.910 EOF 00:30:35.910 )") 00:30:35.910 15:35:03 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # cat 00:30:35.910 15:35:03 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@584 -- # jq . 00:30:35.910 15:35:03 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@585 -- # IFS=, 00:30:35.910 15:35:03 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:35.910 "params": { 00:30:35.910 "name": "Nvme0", 00:30:35.910 "trtype": "rdma", 00:30:35.910 "traddr": "192.168.100.8", 00:30:35.910 "adrfam": "ipv4", 00:30:35.910 "trsvcid": "4420", 00:30:35.910 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:35.910 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:35.910 "hdgst": false, 00:30:35.910 "ddgst": false 00:30:35.910 }, 00:30:35.910 "method": "bdev_nvme_attach_controller" 00:30:35.910 }' 00:30:35.910 [2024-11-06 15:35:03.524250] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:30:35.910 [2024-11-06 15:35:03.524364] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3218839 ] 00:30:36.169 [2024-11-06 15:35:03.672347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:36.169 [2024-11-06 15:35:03.786533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:36.169 [2024-11-06 15:35:03.786560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:42.742 bdev Nvme0n1 reports 1 memory domains 00:30:42.742 bdev Nvme0n1 supports RDMA memory domain 00:30:42.742 Initialization complete, running randrw IO for 5 sec on 2 cores 00:30:42.742 ========================================================================== 00:30:42.742 Latency [us] 00:30:42.742 IOPS MiB/s Average min max 00:30:42.742 Core 2: 18988.16 74.17 841.85 294.77 13331.34 00:30:42.742 Core 3: 18875.58 73.73 846.86 297.66 4511.01 00:30:42.742 ========================================================================== 00:30:42.742 Total : 37863.74 147.91 844.35 294.77 13331.34 00:30:42.742 00:30:42.742 Total operations: 189363, translate 189363 pull_push 0 memzero 0 00:30:42.742 15:35:10 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:30:42.742 15:35:10 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:30:42.742 15:35:10 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:30:42.742 [2024-11-06 15:35:10.220160] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:30:42.742 [2024-11-06 15:35:10.220266] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3219656 ] 00:30:42.742 [2024-11-06 15:35:10.369513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:43.002 [2024-11-06 15:35:10.492358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:43.002 [2024-11-06 15:35:10.492377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:49.573 bdev Malloc0 reports 2 memory domains 00:30:49.573 bdev Malloc0 doesn't support RDMA memory domain 00:30:49.573 Initialization complete, running randrw IO for 5 sec on 2 cores 00:30:49.573 ========================================================================== 00:30:49.573 Latency [us] 00:30:49.573 IOPS MiB/s Average min max 00:30:49.573 Core 2: 12000.64 46.88 1332.36 460.65 1779.35 00:30:49.573 Core 3: 12249.52 47.85 1305.27 500.39 1630.04 00:30:49.573 ========================================================================== 00:30:49.573 Total : 24250.16 94.73 1318.68 460.65 1779.35 00:30:49.573 00:30:49.573 Total operations: 121305, translate 0 pull_push 485220 memzero 0 00:30:49.573 15:35:17 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:30:49.573 15:35:17 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:30:49.573 15:35:17 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:30:49.573 15:35:17 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:30:49.832 Ignoring -M option 00:30:49.832 [2024-11-06 15:35:17.282278] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:30:49.832 [2024-11-06 15:35:17.282377] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3220532 ] 00:30:49.832 [2024-11-06 15:35:17.429472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:50.091 [2024-11-06 15:35:17.542187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.091 [2024-11-06 15:35:17.542211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:56.662 bdev 4fa8dbca-1ed7-48a6-bc98-48253bfa40a3 reports 1 memory domains 00:30:56.662 bdev 4fa8dbca-1ed7-48a6-bc98-48253bfa40a3 supports RDMA memory domain 00:30:56.662 Initialization complete, running randread IO for 5 sec on 2 cores 00:30:56.662 ========================================================================== 00:30:56.662 Latency [us] 00:30:56.662 IOPS MiB/s Average min max 00:30:56.662 Core 2: 61681.62 240.94 258.45 79.77 4233.04 00:30:56.662 Core 3: 63234.34 247.01 251.97 81.01 2021.34 00:30:56.662 ========================================================================== 00:30:56.662 Total : 124915.97 487.95 255.17 79.77 4233.04 00:30:56.662 00:30:56.662 Total operations: 624693, translate 0 pull_push 0 memzero 624693 00:30:56.662 15:35:23 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:30:56.662 [2024-11-06 15:35:24.132239] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:30:59.199 Initializing NVMe Controllers 00:30:59.199 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:30:59.199 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:59.199 Initialization complete. Launching workers. 00:30:59.199 ======================================================== 00:30:59.199 Latency(us) 00:30:59.200 Device Information : IOPS MiB/s Average min max 00:30:59.200 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7971.72 5984.55 8980.59 00:30:59.200 ======================================================== 00:30:59.200 Total : 2016.00 7.88 7971.72 5984.55 8980.59 00:30:59.200 00:30:59.200 15:35:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:30:59.200 15:35:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:30:59.200 15:35:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:30:59.200 15:35:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:30:59.200 [2024-11-06 15:35:26.608488] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:30:59.200 [2024-11-06 15:35:26.608594] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3221744 ] 00:30:59.200 [2024-11-06 15:35:26.754288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:59.459 [2024-11-06 15:35:26.866484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:59.459 [2024-11-06 15:35:26.866509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:06.128 bdev bfe72862-6b5b-4c5d-a8e4-03ebf4f0a75a reports 1 memory domains 00:31:06.128 bdev bfe72862-6b5b-4c5d-a8e4-03ebf4f0a75a supports RDMA memory domain 00:31:06.128 Initialization complete, running randrw IO for 5 sec on 2 cores 00:31:06.128 ========================================================================== 00:31:06.128 Latency [us] 00:31:06.128 IOPS MiB/s Average min max 00:31:06.128 Core 2: 16534.71 64.59 966.92 16.70 6847.89 00:31:06.128 Core 3: 16782.04 65.55 952.65 14.95 6245.77 00:31:06.128 ========================================================================== 00:31:06.128 Total : 33316.75 130.14 959.73 14.95 6847.89 00:31:06.128 00:31:06.128 Total operations: 166625, translate 166488 pull_push 0 memzero 137 00:31:06.128 15:35:33 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:06.128 15:35:33 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:31:06.128 15:35:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:06.128 15:35:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync 00:31:06.128 15:35:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:31:06.128 15:35:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:31:06.128 15:35:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e 00:31:06.128 15:35:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:06.128 15:35:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:31:06.128 rmmod nvme_rdma 00:31:06.128 rmmod nvme_fabrics 00:31:06.128 15:35:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:06.128 15:35:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e 00:31:06.128 15:35:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0 00:31:06.128 15:35:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@517 -- # '[' -n 3218639 ']' 00:31:06.128 15:35:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@518 -- # killprocess 3218639 00:31:06.128 15:35:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@952 -- # '[' -z 3218639 ']' 00:31:06.128 15:35:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # kill -0 3218639 00:31:06.128 15:35:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@957 -- # uname 00:31:06.128 15:35:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:06.128 15:35:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3218639 00:31:06.128 15:35:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:06.128 15:35:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:06.128 15:35:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3218639' 00:31:06.128 killing process 
with pid 3218639 00:31:06.128 15:35:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@971 -- # kill 3218639 00:31:06.128 15:35:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@976 -- # wait 3218639 00:31:08.036 15:35:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:08.036 15:35:35 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:31:08.036 00:31:08.036 real 0m40.630s 00:31:08.036 user 1m58.210s 00:31:08.036 sys 0m7.420s 00:31:08.036 15:35:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:08.036 15:35:35 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:31:08.036 ************************************ 00:31:08.036 END TEST dma 00:31:08.036 ************************************ 00:31:08.036 15:35:35 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:31:08.036 15:35:35 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:08.036 15:35:35 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:08.036 15:35:35 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.036 ************************************ 00:31:08.036 START TEST nvmf_identify 00:31:08.036 ************************************ 00:31:08.036 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:31:08.036 * Looking for test storage... 00:31:08.296 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v 
= 0 )) 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:08.296 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:08.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.296 --rc genhtml_branch_coverage=1 00:31:08.296 --rc genhtml_function_coverage=1 00:31:08.296 --rc genhtml_legend=1 00:31:08.296 --rc geninfo_all_blocks=1 00:31:08.297 --rc geninfo_unexecuted_blocks=1 00:31:08.297 00:31:08.297 ' 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:08.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.297 --rc genhtml_branch_coverage=1 00:31:08.297 --rc genhtml_function_coverage=1 00:31:08.297 --rc genhtml_legend=1 00:31:08.297 --rc geninfo_all_blocks=1 00:31:08.297 --rc geninfo_unexecuted_blocks=1 00:31:08.297 00:31:08.297 ' 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:08.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.297 --rc genhtml_branch_coverage=1 00:31:08.297 --rc genhtml_function_coverage=1 00:31:08.297 --rc genhtml_legend=1 00:31:08.297 --rc geninfo_all_blocks=1 00:31:08.297 --rc geninfo_unexecuted_blocks=1 00:31:08.297 00:31:08.297 ' 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:08.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.297 --rc genhtml_branch_coverage=1 00:31:08.297 --rc genhtml_function_coverage=1 00:31:08.297 --rc genhtml_legend=1 00:31:08.297 --rc geninfo_all_blocks=1 00:31:08.297 --rc geninfo_unexecuted_blocks=1 00:31:08.297 00:31:08.297 ' 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:31:08.297 15:35:35 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:08.297 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:31:08.297 15:35:35 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:31:08.297 15:35:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:14.871 15:35:42 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:31:14.871 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:31:15.131 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:31:15.131 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:31:15.131 Found net devices under 0000:18:00.0: mlx_0_0 00:31:15.131 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:31:15.132 Found net devices under 0000:18:00.1: mlx_0_1 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # rdma_device_init 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_umad 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@530 -- # allocate_nic_ips 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:31:15.132 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:15.132 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:31:15.132 altname enp24s0f0np0 00:31:15.132 altname ens785f0np0 00:31:15.132 inet 192.168.100.8/24 scope global mlx_0_0 00:31:15.132 valid_lft forever preferred_lft forever 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:31:15.132 15:35:42 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:31:15.132 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:15.132 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:31:15.132 altname enp24s0f1np1 00:31:15.132 altname ens785f1np1 00:31:15.132 inet 192.168.100.9/24 scope global mlx_0_1 00:31:15.132 valid_lft forever preferred_lft forever 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:31:15.132 15:35:42 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:31:15.132 192.168.100.9' 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:31:15.132 192.168.100.9' 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # head -n 1 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:31:15.132 192.168.100.9' 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # tail -n +2 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # head -n 1 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:31:15.132 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:31:15.133 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:15.133 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:15.391 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3225740 00:31:15.391 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:15.391 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:15.391 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # 
waitforlisten 3225740 00:31:15.391 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 3225740 ']' 00:31:15.391 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.391 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:15.391 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:15.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.391 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:15.391 15:35:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:15.391 [2024-11-06 15:35:42.865919] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:31:15.391 [2024-11-06 15:35:42.866037] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:15.391 [2024-11-06 15:35:43.015484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:15.649 [2024-11-06 15:35:43.133755] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:15.649 [2024-11-06 15:35:43.133801] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:15.649 [2024-11-06 15:35:43.133814] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:15.649 [2024-11-06 15:35:43.133827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:15.649 [2024-11-06 15:35:43.133838] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:15.649 [2024-11-06 15:35:43.135950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.649 [2024-11-06 15:35:43.136023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:15.649 [2024-11-06 15:35:43.136050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.649 [2024-11-06 15:35:43.136068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:16.217 15:35:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:16.217 15:35:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:31:16.217 15:35:43 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:31:16.217 15:35:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.217 15:35:43 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:16.217 [2024-11-06 15:35:43.706831] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f8f51d61940) succeed. 00:31:16.217 [2024-11-06 15:35:43.716496] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f8f51d1d940) succeed. 
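The rpc_cmd calls in this part of the trace are thin wrappers around SPDK's JSON-RPC client. A rough standalone equivalent of the target configuration the test applies here, assuming the nvmf_tgt started above is still listening on the default /var/tmp/spdk.sock and using the repository's scripts/rpc.py, would be the following sketch:

  # Sketch only: mirrors the rpc_cmd sequence in the surrounding trace.
  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192    # RDMA transport, 8 KiB IO unit
  $RPC bdev_malloc_create 64 512 -b Malloc0                               # 64 MB malloc bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

192.168.100.8 is the address found on mlx_0_0 earlier in this trace; the nvmf_get_subsystems dump that follows shows the resulting listeners.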
00:31:16.476 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.476 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:31:16.476 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:16.476 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:16.476 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:16.476 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.476 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:16.739 Malloc0 00:31:16.739 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.739 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:16.739 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.739 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:16.739 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.739 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:31:16.739 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.739 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:16.739 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.739 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:16.739 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.739 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:16.739 [2024-11-06 15:35:44.158319] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:16.739 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.739 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:31:16.739 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.739 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:16.739 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.739 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:31:16.739 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.739 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:16.739 [ 00:31:16.739 { 00:31:16.739 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:16.739 "subtype": "Discovery", 00:31:16.739 "listen_addresses": [ 00:31:16.739 { 00:31:16.739 "trtype": "RDMA", 
00:31:16.739 "adrfam": "IPv4", 00:31:16.739 "traddr": "192.168.100.8", 00:31:16.739 "trsvcid": "4420" 00:31:16.739 } 00:31:16.739 ], 00:31:16.739 "allow_any_host": true, 00:31:16.739 "hosts": [] 00:31:16.739 }, 00:31:16.739 { 00:31:16.739 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:16.739 "subtype": "NVMe", 00:31:16.739 "listen_addresses": [ 00:31:16.739 { 00:31:16.739 "trtype": "RDMA", 00:31:16.739 "adrfam": "IPv4", 00:31:16.739 "traddr": "192.168.100.8", 00:31:16.739 "trsvcid": "4420" 00:31:16.739 } 00:31:16.739 ], 00:31:16.739 "allow_any_host": true, 00:31:16.739 "hosts": [], 00:31:16.739 "serial_number": "SPDK00000000000001", 00:31:16.739 "model_number": "SPDK bdev Controller", 00:31:16.739 "max_namespaces": 32, 00:31:16.739 "min_cntlid": 1, 00:31:16.739 "max_cntlid": 65519, 00:31:16.739 "namespaces": [ 00:31:16.739 { 00:31:16.739 "nsid": 1, 00:31:16.739 "bdev_name": "Malloc0", 00:31:16.739 "name": "Malloc0", 00:31:16.739 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:31:16.739 "eui64": "ABCDEF0123456789", 00:31:16.739 "uuid": "5338d6f7-c3bf-4d09-98ca-81a5831218ed" 00:31:16.739 } 00:31:16.739 ] 00:31:16.739 } 00:31:16.739 ] 00:31:16.739 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.739 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:31:16.739 [2024-11-06 15:35:44.243203] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:31:16.740 [2024-11-06 15:35:44.243283] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3225948 ] 00:31:16.740 [2024-11-06 15:35:44.328332] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:31:16.740 [2024-11-06 15:35:44.328452] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:31:16.740 [2024-11-06 15:35:44.328485] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:31:16.740 [2024-11-06 15:35:44.328494] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:31:16.740 [2024-11-06 15:35:44.328549] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:31:16.740 [2024-11-06 15:35:44.339553] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:31:16.740 [2024-11-06 15:35:44.354617] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0 00:31:16.740 [2024-11-06 15:35:44.354642] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:31:16.740 [2024-11-06 15:35:44.354663] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354676] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354690] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354701] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354712] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354721] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354731] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354740] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354751] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354760] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354770] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354778] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354791] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354800] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354810] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354819] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4d8 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354829] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf500 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354837] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf528 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354850] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf550 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354862] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf578 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354872] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5a0 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354885] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5c8 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354895] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5f0 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 
15:35:44.354908] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf618 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354925] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354934] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354944] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354953] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354964] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354973] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354983] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.354991] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:31:16.740 [2024-11-06 15:35:44.355002] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0 00:31:16.740 [2024-11-06 15:35:44.355010] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:31:16.740 [2024-11-06 15:35:44.355049] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.355074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cedc0 len:0x400 key:0x180800 00:31:16.740 [2024-11-06 15:35:44.360143] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.740 [2024-11-06 15:35:44.360170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:31:16.740 [2024-11-06 15:35:44.360191] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.360205] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:16.740 [2024-11-06 15:35:44.360227] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:31:16.740 [2024-11-06 15:35:44.360238] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:31:16.740 [2024-11-06 15:35:44.360262] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.360276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.740 [2024-11-06 15:35:44.360324] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.740 [2024-11-06 15:35:44.360334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:31:16.740 [2024-11-06 15:35:44.360350] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:31:16.740 [2024-11-06 15:35:44.360362] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.360380] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:31:16.740 [2024-11-06 15:35:44.360392] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.360409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.740 [2024-11-06 15:35:44.360421] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.740 [2024-11-06 15:35:44.360432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:31:16.740 [2024-11-06 15:35:44.360447] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:31:16.740 [2024-11-06 15:35:44.360459] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.360470] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:31:16.740 [2024-11-06 15:35:44.360484] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.360497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.740 [2024-11-06 15:35:44.360524] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.740 [2024-11-06 15:35:44.360534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:16.740 [2024-11-06 15:35:44.360548] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:16.740 [2024-11-06 15:35:44.360559] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.360577] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.360590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.740 [2024-11-06 15:35:44.360610] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.740 [2024-11-06 15:35:44.360619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:16.740 [2024-11-06 15:35:44.360634] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:31:16.740 [2024-11-06 15:35:44.360644] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:31:16.740 [2024-11-06 15:35:44.360655] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x180800 00:31:16.740 [2024-11-06 
15:35:44.360665] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:16.740 [2024-11-06 15:35:44.360778] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:31:16.740 [2024-11-06 15:35:44.360787] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:16.740 [2024-11-06 15:35:44.360805] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:16.740 [2024-11-06 15:35:44.360819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.740 [2024-11-06 15:35:44.360853] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.740 [2024-11-06 15:35:44.360861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:16.740 [2024-11-06 15:35:44.360876] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:16.741 [2024-11-06 15:35:44.360885] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x180800 00:31:16.741 [2024-11-06 15:35:44.360900] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:16.741 [2024-11-06 15:35:44.360914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.741 [2024-11-06 15:35:44.360936] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.741 [2024-11-06 15:35:44.360945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:31:16.741 [2024-11-06 15:35:44.360960] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:16.741 [2024-11-06 15:35:44.360969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:31:16.741 [2024-11-06 15:35:44.360982] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x180800 00:31:16.741 [2024-11-06 15:35:44.360992] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:31:16.741 [2024-11-06 15:35:44.361012] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:31:16.741 [2024-11-06 15:35:44.361034] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:16.741 [2024-11-06 15:35:44.361052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180800 00:31:16.741 [2024-11-06 15:35:44.361110] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
00:31:16.741 [2024-11-06 15:35:44.361122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:16.741 [2024-11-06 15:35:44.361145] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:31:16.741 [2024-11-06 15:35:44.361158] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:31:16.741 [2024-11-06 15:35:44.361167] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:31:16.741 [2024-11-06 15:35:44.361181] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:31:16.741 [2024-11-06 15:35:44.361190] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:31:16.741 [2024-11-06 15:35:44.361207] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:31:16.741 [2024-11-06 15:35:44.361216] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x180800 00:31:16.741 [2024-11-06 15:35:44.361234] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:31:16.741 [2024-11-06 15:35:44.361246] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:16.741 [2024-11-06 15:35:44.361264] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.741 [2024-11-06 15:35:44.361285] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.741 [2024-11-06 15:35:44.361296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:16.741 [2024-11-06 15:35:44.361309] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0200 length 0x40 lkey 0x180800 00:31:16.741 [2024-11-06 15:35:44.361325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.741 [2024-11-06 15:35:44.361336] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x180800 00:31:16.741 [2024-11-06 15:35:44.361351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.741 [2024-11-06 15:35:44.361363] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:16.741 [2024-11-06 15:35:44.361376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.741 [2024-11-06 15:35:44.361386] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x180800 00:31:16.741 [2024-11-06 15:35:44.361398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.741 [2024-11-06 15:35:44.361407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:16.741 [2024-11-06 15:35:44.361418] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x180800 00:31:16.741 [2024-11-06 15:35:44.361434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:16.741 [2024-11-06 15:35:44.361449] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:16.741 [2024-11-06 15:35:44.361461] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.741 [2024-11-06 15:35:44.361501] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.741 [2024-11-06 15:35:44.361510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:31:16.741 [2024-11-06 15:35:44.361524] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:31:16.741 [2024-11-06 15:35:44.361535] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:31:16.741 [2024-11-06 15:35:44.361549] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x180800 00:31:16.741 [2024-11-06 15:35:44.361566] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:16.741 [2024-11-06 15:35:44.361583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180800 00:31:16.741 [2024-11-06 15:35:44.361623] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.741 [2024-11-06 15:35:44.361635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:16.741 [2024-11-06 15:35:44.361653] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x180800 00:31:16.741 [2024-11-06 15:35:44.361672] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:31:16.741 [2024-11-06 15:35:44.361722] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:16.741 [2024-11-06 15:35:44.361742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x400 key:0x180800 00:31:16.741 [2024-11-06 15:35:44.361759] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180800 00:31:16.741 [2024-11-06 15:35:44.361778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.741 [2024-11-06 15:35:44.361816] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.741 [2024-11-06 15:35:44.361828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
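As an optional cross-check from the kernel initiator side (not part of this test script; it assumes nvme-cli is installed and relies on the nvme-rdma module loaded earlier in this trace), the same discovery listener can be queried with:

  # Hypothetical cross-check with nvme-cli against the discovery listener above.
  nvme discover -t rdma -a 192.168.100.8 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562

Its discovery log should list the nqn.2016-06.io.spdk:cnode1 subsystem registered above on the same RDMA/IPv4 port 4420.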
00:31:16.741 [2024-11-06 15:35:44.361858] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x180800 00:31:16.741 [2024-11-06 15:35:44.361873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x180800 00:31:16.741 [2024-11-06 15:35:44.361882] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x180800 00:31:16.741 [2024-11-06 15:35:44.361894] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.741 [2024-11-06 15:35:44.361903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:16.741 [2024-11-06 15:35:44.361914] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x180800 00:31:16.741 [2024-11-06 15:35:44.361924] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.741 [2024-11-06 15:35:44.361937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:16.741 [2024-11-06 15:35:44.361953] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180800 00:31:16.741 [2024-11-06 15:35:44.361968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x180800 00:31:16.741 [2024-11-06 15:35:44.361977] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0x180800 00:31:16.741 [2024-11-06 15:35:44.362004] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.741 [2024-11-06 15:35:44.362012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:16.741 [2024-11-06 15:35:44.362031] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0x180800 00:31:16.741 ===================================================== 00:31:16.741 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:16.741 ===================================================== 00:31:16.741 Controller Capabilities/Features 00:31:16.741 ================================ 00:31:16.741 Vendor ID: 0000 00:31:16.741 Subsystem Vendor ID: 0000 00:31:16.741 Serial Number: .................... 00:31:16.741 Model Number: ........................................ 
00:31:16.741 Firmware Version: 25.01 00:31:16.741 Recommended Arb Burst: 0 00:31:16.741 IEEE OUI Identifier: 00 00 00 00:31:16.741 Multi-path I/O 00:31:16.741 May have multiple subsystem ports: No 00:31:16.741 May have multiple controllers: No 00:31:16.741 Associated with SR-IOV VF: No 00:31:16.741 Max Data Transfer Size: 131072 00:31:16.742 Max Number of Namespaces: 0 00:31:16.742 Max Number of I/O Queues: 1024 00:31:16.742 NVMe Specification Version (VS): 1.3 00:31:16.742 NVMe Specification Version (Identify): 1.3 00:31:16.742 Maximum Queue Entries: 128 00:31:16.742 Contiguous Queues Required: Yes 00:31:16.742 Arbitration Mechanisms Supported 00:31:16.742 Weighted Round Robin: Not Supported 00:31:16.742 Vendor Specific: Not Supported 00:31:16.742 Reset Timeout: 15000 ms 00:31:16.742 Doorbell Stride: 4 bytes 00:31:16.742 NVM Subsystem Reset: Not Supported 00:31:16.742 Command Sets Supported 00:31:16.742 NVM Command Set: Supported 00:31:16.742 Boot Partition: Not Supported 00:31:16.742 Memory Page Size Minimum: 4096 bytes 00:31:16.742 Memory Page Size Maximum: 4096 bytes 00:31:16.742 Persistent Memory Region: Not Supported 00:31:16.742 Optional Asynchronous Events Supported 00:31:16.742 Namespace Attribute Notices: Not Supported 00:31:16.742 Firmware Activation Notices: Not Supported 00:31:16.742 ANA Change Notices: Not Supported 00:31:16.742 PLE Aggregate Log Change Notices: Not Supported 00:31:16.742 LBA Status Info Alert Notices: Not Supported 00:31:16.742 EGE Aggregate Log Change Notices: Not Supported 00:31:16.742 Normal NVM Subsystem Shutdown event: Not Supported 00:31:16.742 Zone Descriptor Change Notices: Not Supported 00:31:16.742 Discovery Log Change Notices: Supported 00:31:16.742 Controller Attributes 00:31:16.742 128-bit Host Identifier: Not Supported 00:31:16.742 Non-Operational Permissive Mode: Not Supported 00:31:16.742 NVM Sets: Not Supported 00:31:16.742 Read Recovery Levels: Not Supported 00:31:16.742 Endurance Groups: Not Supported 00:31:16.742 Predictable Latency Mode: Not Supported 00:31:16.742 Traffic Based Keep ALive: Not Supported 00:31:16.742 Namespace Granularity: Not Supported 00:31:16.742 SQ Associations: Not Supported 00:31:16.742 UUID List: Not Supported 00:31:16.742 Multi-Domain Subsystem: Not Supported 00:31:16.742 Fixed Capacity Management: Not Supported 00:31:16.742 Variable Capacity Management: Not Supported 00:31:16.742 Delete Endurance Group: Not Supported 00:31:16.742 Delete NVM Set: Not Supported 00:31:16.742 Extended LBA Formats Supported: Not Supported 00:31:16.742 Flexible Data Placement Supported: Not Supported 00:31:16.742 00:31:16.742 Controller Memory Buffer Support 00:31:16.742 ================================ 00:31:16.742 Supported: No 00:31:16.742 00:31:16.742 Persistent Memory Region Support 00:31:16.742 ================================ 00:31:16.742 Supported: No 00:31:16.742 00:31:16.742 Admin Command Set Attributes 00:31:16.742 ============================ 00:31:16.742 Security Send/Receive: Not Supported 00:31:16.742 Format NVM: Not Supported 00:31:16.742 Firmware Activate/Download: Not Supported 00:31:16.742 Namespace Management: Not Supported 00:31:16.742 Device Self-Test: Not Supported 00:31:16.742 Directives: Not Supported 00:31:16.742 NVMe-MI: Not Supported 00:31:16.742 Virtualization Management: Not Supported 00:31:16.742 Doorbell Buffer Config: Not Supported 00:31:16.742 Get LBA Status Capability: Not Supported 00:31:16.742 Command & Feature Lockdown Capability: Not Supported 00:31:16.742 Abort Command Limit: 1 00:31:16.742 Async 
Event Request Limit: 4 00:31:16.742 Number of Firmware Slots: N/A 00:31:16.742 Firmware Slot 1 Read-Only: N/A 00:31:16.742 Firmware Activation Without Reset: N/A 00:31:16.742 Multiple Update Detection Support: N/A 00:31:16.742 Firmware Update Granularity: No Information Provided 00:31:16.742 Per-Namespace SMART Log: No 00:31:16.742 Asymmetric Namespace Access Log Page: Not Supported 00:31:16.742 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:16.742 Command Effects Log Page: Not Supported 00:31:16.742 Get Log Page Extended Data: Supported 00:31:16.742 Telemetry Log Pages: Not Supported 00:31:16.742 Persistent Event Log Pages: Not Supported 00:31:16.742 Supported Log Pages Log Page: May Support 00:31:16.742 Commands Supported & Effects Log Page: Not Supported 00:31:16.742 Feature Identifiers & Effects Log Page:May Support 00:31:16.742 NVMe-MI Commands & Effects Log Page: May Support 00:31:16.742 Data Area 4 for Telemetry Log: Not Supported 00:31:16.742 Error Log Page Entries Supported: 128 00:31:16.742 Keep Alive: Not Supported 00:31:16.742 00:31:16.742 NVM Command Set Attributes 00:31:16.742 ========================== 00:31:16.742 Submission Queue Entry Size 00:31:16.742 Max: 1 00:31:16.742 Min: 1 00:31:16.742 Completion Queue Entry Size 00:31:16.742 Max: 1 00:31:16.742 Min: 1 00:31:16.742 Number of Namespaces: 0 00:31:16.742 Compare Command: Not Supported 00:31:16.742 Write Uncorrectable Command: Not Supported 00:31:16.742 Dataset Management Command: Not Supported 00:31:16.742 Write Zeroes Command: Not Supported 00:31:16.742 Set Features Save Field: Not Supported 00:31:16.742 Reservations: Not Supported 00:31:16.742 Timestamp: Not Supported 00:31:16.742 Copy: Not Supported 00:31:16.742 Volatile Write Cache: Not Present 00:31:16.742 Atomic Write Unit (Normal): 1 00:31:16.742 Atomic Write Unit (PFail): 1 00:31:16.742 Atomic Compare & Write Unit: 1 00:31:16.742 Fused Compare & Write: Supported 00:31:16.742 Scatter-Gather List 00:31:16.742 SGL Command Set: Supported 00:31:16.742 SGL Keyed: Supported 00:31:16.742 SGL Bit Bucket Descriptor: Not Supported 00:31:16.742 SGL Metadata Pointer: Not Supported 00:31:16.742 Oversized SGL: Not Supported 00:31:16.742 SGL Metadata Address: Not Supported 00:31:16.742 SGL Offset: Supported 00:31:16.742 Transport SGL Data Block: Not Supported 00:31:16.742 Replay Protected Memory Block: Not Supported 00:31:16.742 00:31:16.742 Firmware Slot Information 00:31:16.742 ========================= 00:31:16.742 Active slot: 0 00:31:16.742 00:31:16.742 00:31:16.742 Error Log 00:31:16.742 ========= 00:31:16.742 00:31:16.742 Active Namespaces 00:31:16.742 ================= 00:31:16.742 Discovery Log Page 00:31:16.742 ================== 00:31:16.742 Generation Counter: 2 00:31:16.742 Number of Records: 2 00:31:16.742 Record Format: 0 00:31:16.742 00:31:16.742 Discovery Log Entry 0 00:31:16.742 ---------------------- 00:31:16.742 Transport Type: 1 (RDMA) 00:31:16.742 Address Family: 1 (IPv4) 00:31:16.742 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:16.742 Entry Flags: 00:31:16.742 Duplicate Returned Information: 1 00:31:16.742 Explicit Persistent Connection Support for Discovery: 1 00:31:16.742 Transport Requirements: 00:31:16.742 Secure Channel: Not Required 00:31:16.742 Port ID: 0 (0x0000) 00:31:16.742 Controller ID: 65535 (0xffff) 00:31:16.742 Admin Max SQ Size: 128 00:31:16.742 Transport Service Identifier: 4420 00:31:16.742 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:16.742 Transport Address: 192.168.100.8 00:31:16.742 
Transport Specific Address Subtype - RDMA 00:31:16.742 RDMA QP Service Type: 1 (Reliable Connected) 00:31:16.742 RDMA Provider Type: 1 (No provider specified) 00:31:16.742 RDMA CM Service: 1 (RDMA_CM) 00:31:16.742 Discovery Log Entry 1 00:31:16.742 ---------------------- 00:31:16.742 Transport Type: 1 (RDMA) 00:31:16.742 Address Family: 1 (IPv4) 00:31:16.742 Subsystem Type: 2 (NVM Subsystem) 00:31:16.742 Entry Flags: 00:31:16.742 Duplicate Returned Information: 0 00:31:16.742 Explicit Persistent Connection Support for Discovery: 0 00:31:16.743 Transport Requirements: 00:31:16.743 Secure Channel: Not Required 00:31:16.743 Port ID: 0 (0x0000) 00:31:16.743 Controller ID: 65535 (0xffff) 00:31:16.743 Admin Max SQ Size: [2024-11-06 15:35:44.362162] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:31:16.743 [2024-11-06 15:35:44.362183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.743 [2024-11-06 15:35:44.362195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.743 [2024-11-06 15:35:44.362207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.743 [2024-11-06 15:35:44.362224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.743 [2024-11-06 15:35:44.362243] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.362256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.743 [2024-11-06 15:35:44.362284] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.743 [2024-11-06 15:35:44.362295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:31:16.743 [2024-11-06 15:35:44.362315] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.362327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.743 [2024-11-06 15:35:44.362344] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4d8 length 0x10 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.362361] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.743 [2024-11-06 15:35:44.362374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:16.743 [2024-11-06 15:35:44.362389] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:31:16.743 [2024-11-06 15:35:44.362406] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:31:16.743 [2024-11-06 15:35:44.362416] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf500 length 0x10 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.362433] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 
0x180800 00:31:16.743 [2024-11-06 15:35:44.362446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.743 [2024-11-06 15:35:44.362467] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.743 [2024-11-06 15:35:44.362476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:31:16.743 [2024-11-06 15:35:44.362488] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf528 length 0x10 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.362501] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.362517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.743 [2024-11-06 15:35:44.362535] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.743 [2024-11-06 15:35:44.362548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:31:16.743 [2024-11-06 15:35:44.362557] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf550 length 0x10 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.362574] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.362585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.743 [2024-11-06 15:35:44.362610] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.743 [2024-11-06 15:35:44.362619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:31:16.743 [2024-11-06 15:35:44.362633] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf578 length 0x10 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.362648] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.362662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.743 [2024-11-06 15:35:44.362687] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.743 [2024-11-06 15:35:44.362698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:31:16.743 [2024-11-06 15:35:44.362707] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5a0 length 0x10 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.362722] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.362733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.743 [2024-11-06 15:35:44.362761] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.743 [2024-11-06 15:35:44.362772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:31:16.743 [2024-11-06 15:35:44.362784] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5c8 length 0x10 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.362797] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.362811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.743 [2024-11-06 15:35:44.362827] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.743 [2024-11-06 15:35:44.362838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:31:16.743 [2024-11-06 15:35:44.362850] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5f0 length 0x10 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.362867] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.362878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.743 [2024-11-06 15:35:44.362902] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.743 [2024-11-06 15:35:44.362911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:31:16.743 [2024-11-06 15:35:44.362922] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf618 length 0x10 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.362934] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.362947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.743 [2024-11-06 15:35:44.362964] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.743 [2024-11-06 15:35:44.362976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:31:16.743 [2024-11-06 15:35:44.362985] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.362999] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.363012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.743 [2024-11-06 15:35:44.363042] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.743 [2024-11-06 15:35:44.363051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:31:16.743 [2024-11-06 15:35:44.363062] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.363074] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.363093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.743 [2024-11-06 15:35:44.363108] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.743 [2024-11-06 15:35:44.363119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:31:16.743 [2024-11-06 15:35:44.363135] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.363150] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.363163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.743 [2024-11-06 15:35:44.363188] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.743 [2024-11-06 15:35:44.363197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:31:16.743 [2024-11-06 15:35:44.363209] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.363224] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.363240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.743 [2024-11-06 15:35:44.363257] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.743 [2024-11-06 15:35:44.363268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:31:16.743 [2024-11-06 15:35:44.363277] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.363293] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:16.743 [2024-11-06 15:35:44.363305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.743 [2024-11-06 15:35:44.363332] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.743 [2024-11-06 15:35:44.363342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:31:16.743 [2024-11-06 15:35:44.363354] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180800 00:31:16.744 [2024-11-06 15:35:44.363366] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:16.744 [2024-11-06 15:35:44.363380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.744 [2024-11-06 15:35:44.363404] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.744 [2024-11-06 15:35:44.363415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:31:16.744 [2024-11-06 15:35:44.363431] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180800 00:31:16.744 [2024-11-06 15:35:44.363446] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 
length 0x40 lkey 0x180800 00:31:16.744 [2024-11-06 15:35:44.363458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.744 [2024-11-06 15:35:44.363481] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.744 [2024-11-06 15:35:44.363491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:31:16.744 [2024-11-06 15:35:44.363502] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x180800 00:31:16.744 [2024-11-06 15:35:44.363514] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:16.744 [2024-11-06 15:35:44.363530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.744 [2024-11-06 15:35:44.363550] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.744 [2024-11-06 15:35:44.363564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:31:16.744 [2024-11-06 15:35:44.363573] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x180800 00:31:16.744 [2024-11-06 15:35:44.363587] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:16.744 [2024-11-06 15:35:44.363599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.744 [2024-11-06 15:35:44.363634] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.744 [2024-11-06 15:35:44.363643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:31:16.744 [2024-11-06 15:35:44.363655] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x180800 00:31:16.744 [2024-11-06 15:35:44.363667] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:16.744 [2024-11-06 15:35:44.363685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.744 [2024-11-06 15:35:44.363705] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.744 [2024-11-06 15:35:44.363717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:31:16.744 [2024-11-06 15:35:44.363725] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x180800 00:31:16.744 [2024-11-06 15:35:44.363740] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:16.744 [2024-11-06 15:35:44.363752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.744 [2024-11-06 15:35:44.363775] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.744 [2024-11-06 15:35:44.363784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:31:16.744 [2024-11-06 
15:35:44.363799] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x180800 00:31:16.744 [2024-11-06 15:35:44.363814] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:16.744 [2024-11-06 15:35:44.363828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.744 [2024-11-06 15:35:44.363848] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.744 [2024-11-06 15:35:44.363859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:31:16.744 [2024-11-06 15:35:44.363868] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x180800 00:31:16.744 [2024-11-06 15:35:44.363883] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:16.744 [2024-11-06 15:35:44.363894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.744 [2024-11-06 15:35:44.363921] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.744 [2024-11-06 15:35:44.363930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:31:16.744 [2024-11-06 15:35:44.363941] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x180800 00:31:16.744 [2024-11-06 15:35:44.363953] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:16.744 [2024-11-06 15:35:44.363966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.744 [2024-11-06 15:35:44.363985] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.744 [2024-11-06 15:35:44.363996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:31:16.744 [2024-11-06 15:35:44.364008] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x180800 00:31:16.744 [2024-11-06 15:35:44.364025] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:16.744 [2024-11-06 15:35:44.364036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.744 [2024-11-06 15:35:44.364064] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.744 [2024-11-06 15:35:44.364072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:31:16.744 [2024-11-06 15:35:44.364084] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x180800 00:31:16.744 [2024-11-06 15:35:44.364105] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:16.744 [2024-11-06 15:35:44.364118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.744 [2024-11-06 15:35:44.368149] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.744 [2024-11-06 15:35:44.368168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:31:16.744 [2024-11-06 15:35:44.368178] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x180800 00:31:16.744 [2024-11-06 15:35:44.368197] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:16.744 [2024-11-06 15:35:44.368210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:16.744 [2024-11-06 15:35:44.368243] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:16.744 [2024-11-06 15:35:44.368252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000a p:0 m:0 dnr:0 00:31:16.744 [2024-11-06 15:35:44.368266] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x180800 00:31:16.744 [2024-11-06 15:35:44.368276] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:31:17.004 128 00:31:17.004 Transport Service Identifier: 4420 00:31:17.004 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:31:17.004 Transport Address: 192.168.100.8 00:31:17.004 Transport Specific Address Subtype - RDMA 00:31:17.004 RDMA QP Service Type: 1 (Reliable Connected) 00:31:17.004 RDMA Provider Type: 1 (No provider specified) 00:31:17.004 RDMA CM Service: 1 (RDMA_CM) 00:31:17.004 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:31:17.004 [2024-11-06 15:35:44.545417] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:31:17.004 [2024-11-06 15:35:44.545503] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3226012 ] 00:31:17.004 [2024-11-06 15:35:44.634257] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:31:17.004 [2024-11-06 15:35:44.634370] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:31:17.004 [2024-11-06 15:35:44.634407] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:31:17.004 [2024-11-06 15:35:44.634416] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:31:17.004 [2024-11-06 15:35:44.634471] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:31:17.266 [2024-11-06 15:35:44.645519] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
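The identify run above can be reproduced by hand against the same RDMA listener. A minimal sketch, assuming a built SPDK tree on the host: the transport string is copied verbatim from the host/identify.sh@45 invocation recorded in this log, while the nvme-cli discovery call is only an assumed illustrative equivalent and is not part of this test run:

  # SPDK identify of the data subsystem, as host/identify.sh@45 invokes it
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -L all

  # Assumed equivalent query of the discovery log page with nvme-cli (illustrative only)
  nvme discover -t rdma -a 192.168.100.8 -s 4420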
00:31:17.266 [2024-11-06 15:35:44.656216] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0 00:31:17.266 [2024-11-06 15:35:44.656238] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:31:17.266 [2024-11-06 15:35:44.656260] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656276] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656288] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656299] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656309] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656318] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656328] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656336] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656346] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656355] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656368] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656376] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656386] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656395] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656405] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656414] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4d8 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656424] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf500 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656432] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf528 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656444] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf550 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656456] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf578 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656466] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5a0 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656475] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5c8 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656485] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5f0 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 
15:35:44.656493] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf618 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656510] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656519] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656529] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656537] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656547] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656557] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656569] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656577] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:31:17.266 [2024-11-06 15:35:44.656588] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0 00:31:17.266 [2024-11-06 15:35:44.656597] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:31:17.266 [2024-11-06 15:35:44.656635] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.656658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cedc0 len:0x400 key:0x180800 00:31:17.266 [2024-11-06 15:35:44.661141] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.266 [2024-11-06 15:35:44.661164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:31:17.266 [2024-11-06 15:35:44.661182] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.661195] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:17.266 [2024-11-06 15:35:44.661212] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:31:17.266 [2024-11-06 15:35:44.661223] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:31:17.266 [2024-11-06 15:35:44.661245] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.661260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.266 [2024-11-06 15:35:44.661297] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.266 [2024-11-06 15:35:44.661308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:31:17.266 [2024-11-06 15:35:44.661323] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:31:17.266 [2024-11-06 15:35:44.661335] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.661349] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:31:17.266 [2024-11-06 15:35:44.661362] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:17.266 [2024-11-06 15:35:44.661380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.266 [2024-11-06 15:35:44.661404] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.266 [2024-11-06 15:35:44.661416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:31:17.266 [2024-11-06 15:35:44.661425] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:31:17.267 [2024-11-06 15:35:44.661437] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x180800 00:31:17.267 [2024-11-06 15:35:44.661447] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:31:17.267 [2024-11-06 15:35:44.661460] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:17.267 [2024-11-06 15:35:44.661472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.267 [2024-11-06 15:35:44.661495] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.267 [2024-11-06 15:35:44.661504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:17.267 [2024-11-06 15:35:44.661516] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:17.267 [2024-11-06 15:35:44.661528] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x180800 00:31:17.267 [2024-11-06 15:35:44.661548] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:17.267 [2024-11-06 15:35:44.661563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.267 [2024-11-06 15:35:44.661594] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.267 [2024-11-06 15:35:44.661603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:17.267 [2024-11-06 15:35:44.661618] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:31:17.267 [2024-11-06 15:35:44.661628] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:31:17.267 [2024-11-06 15:35:44.661639] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x180800 00:31:17.267 [2024-11-06 15:35:44.661649] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:17.267 [2024-11-06 15:35:44.661762] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:31:17.267 [2024-11-06 15:35:44.661771] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:17.267 [2024-11-06 15:35:44.661787] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:17.267 [2024-11-06 15:35:44.661799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.267 [2024-11-06 15:35:44.661827] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.267 [2024-11-06 15:35:44.661836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:17.267 [2024-11-06 15:35:44.661850] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:17.267 [2024-11-06 15:35:44.661860] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x180800 00:31:17.267 [2024-11-06 15:35:44.661874] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:17.267 [2024-11-06 15:35:44.661889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.267 [2024-11-06 15:35:44.661907] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.267 [2024-11-06 15:35:44.661915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:31:17.267 [2024-11-06 15:35:44.661928] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:17.267 [2024-11-06 15:35:44.661937] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:31:17.267 [2024-11-06 15:35:44.661950] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x180800 00:31:17.267 [2024-11-06 15:35:44.661960] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:31:17.267 [2024-11-06 15:35:44.661974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:31:17.267 [2024-11-06 15:35:44.661996] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:17.267 [2024-11-06 15:35:44.662011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180800 00:31:17.267 [2024-11-06 15:35:44.662083] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.267 [2024-11-06 15:35:44.662094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 
dnr:0 00:31:17.267 [2024-11-06 15:35:44.662114] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:31:17.267 [2024-11-06 15:35:44.662138] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:31:17.267 [2024-11-06 15:35:44.662147] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:31:17.267 [2024-11-06 15:35:44.662158] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:31:17.267 [2024-11-06 15:35:44.662167] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:31:17.267 [2024-11-06 15:35:44.662180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:31:17.267 [2024-11-06 15:35:44.662189] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x180800 00:31:17.267 [2024-11-06 15:35:44.662207] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:31:17.267 [2024-11-06 15:35:44.662219] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:17.267 [2024-11-06 15:35:44.662234] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.267 [2024-11-06 15:35:44.662258] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.267 [2024-11-06 15:35:44.662269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:17.267 [2024-11-06 15:35:44.662282] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0200 length 0x40 lkey 0x180800 00:31:17.267 [2024-11-06 15:35:44.662301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.267 [2024-11-06 15:35:44.662312] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x180800 00:31:17.267 [2024-11-06 15:35:44.662324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.267 [2024-11-06 15:35:44.662334] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:17.267 [2024-11-06 15:35:44.662346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.267 [2024-11-06 15:35:44.662355] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x180800 00:31:17.267 [2024-11-06 15:35:44.662367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.267 [2024-11-06 15:35:44.662375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:17.267 [2024-11-06 15:35:44.662387] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x180800 
00:31:17.267 [2024-11-06 15:35:44.662406] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:17.267 [2024-11-06 15:35:44.662424] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:17.267 [2024-11-06 15:35:44.662436] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.267 [2024-11-06 15:35:44.662457] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.267 [2024-11-06 15:35:44.662467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:31:17.267 [2024-11-06 15:35:44.662481] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:31:17.267 [2024-11-06 15:35:44.662490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:31:17.267 [2024-11-06 15:35:44.662501] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x180800 00:31:17.267 [2024-11-06 15:35:44.662512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:31:17.267 [2024-11-06 15:35:44.662526] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:31:17.267 [2024-11-06 15:35:44.662536] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:17.267 [2024-11-06 15:35:44.662555] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.267 [2024-11-06 15:35:44.662583] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.267 [2024-11-06 15:35:44.662594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:31:17.267 [2024-11-06 15:35:44.662673] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:31:17.267 [2024-11-06 15:35:44.662685] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x180800 00:31:17.267 [2024-11-06 15:35:44.662702] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:31:17.267 [2024-11-06 15:35:44.662725] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:17.267 [2024-11-06 15:35:44.662739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x180800 00:31:17.267 [2024-11-06 15:35:44.662781] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.267 [2024-11-06 15:35:44.662789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:17.267 
[2024-11-06 15:35:44.662822] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:31:17.267 [2024-11-06 15:35:44.662840] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:31:17.267 [2024-11-06 15:35:44.662852] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x180800 00:31:17.268 [2024-11-06 15:35:44.662864] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:31:17.268 [2024-11-06 15:35:44.662881] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:17.268 [2024-11-06 15:35:44.662895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180800 00:31:17.268 [2024-11-06 15:35:44.662955] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.268 [2024-11-06 15:35:44.662964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:17.268 [2024-11-06 15:35:44.662987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:31:17.268 [2024-11-06 15:35:44.662999] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x180800 00:31:17.268 [2024-11-06 15:35:44.663018] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:31:17.268 [2024-11-06 15:35:44.663032] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:17.268 [2024-11-06 15:35:44.663048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180800 00:31:17.268 [2024-11-06 15:35:44.663077] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.268 [2024-11-06 15:35:44.663088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:17.268 [2024-11-06 15:35:44.663104] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:31:17.268 [2024-11-06 15:35:44.663116] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0x180800 00:31:17.268 [2024-11-06 15:35:44.663133] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:31:17.268 [2024-11-06 15:35:44.663161] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:31:17.268 [2024-11-06 15:35:44.663171] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:31:17.268 [2024-11-06 15:35:44.663183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell 
buffer config (timeout 30000 ms) 00:31:17.268 [2024-11-06 15:35:44.663193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:31:17.268 [2024-11-06 15:35:44.663207] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:31:17.268 [2024-11-06 15:35:44.663216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:31:17.268 [2024-11-06 15:35:44.663231] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:31:17.268 [2024-11-06 15:35:44.663266] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:17.268 [2024-11-06 15:35:44.663281] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.268 [2024-11-06 15:35:44.663293] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180800 00:31:17.268 [2024-11-06 15:35:44.663308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.268 [2024-11-06 15:35:44.663322] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.268 [2024-11-06 15:35:44.663334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:17.268 [2024-11-06 15:35:44.663343] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0x180800 00:31:17.268 [2024-11-06 15:35:44.663356] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.268 [2024-11-06 15:35:44.663364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:17.268 [2024-11-06 15:35:44.663375] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4d8 length 0x10 lkey 0x180800 00:31:17.268 [2024-11-06 15:35:44.663388] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180800 00:31:17.268 [2024-11-06 15:35:44.663405] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.268 [2024-11-06 15:35:44.663424] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.268 [2024-11-06 15:35:44.663435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:17.268 [2024-11-06 15:35:44.663444] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf500 length 0x10 lkey 0x180800 00:31:17.268 [2024-11-06 15:35:44.663458] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180800 00:31:17.268 [2024-11-06 15:35:44.663469] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.268 [2024-11-06 15:35:44.663497] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.268 [2024-11-06 15:35:44.663505] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:17.268 [2024-11-06 15:35:44.663520] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf528 length 0x10 lkey 0x180800 00:31:17.268 [2024-11-06 15:35:44.663532] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180800 00:31:17.268 [2024-11-06 15:35:44.663548] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.268 [2024-11-06 15:35:44.663568] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.268 [2024-11-06 15:35:44.663579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:31:17.268 [2024-11-06 15:35:44.663588] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf550 length 0x10 lkey 0x180800 00:31:17.268 [2024-11-06 15:35:44.663612] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x180800 00:31:17.268 [2024-11-06 15:35:44.663627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x180800 00:31:17.268 [2024-11-06 15:35:44.663642] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x180800 00:31:17.268 [2024-11-06 15:35:44.663653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x180800 00:31:17.268 [2024-11-06 15:35:44.663670] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x180800 00:31:17.268 [2024-11-06 15:35:44.663682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c8000 len:0x200 key:0x180800 00:31:17.268 [2024-11-06 15:35:44.663703] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x180800 00:31:17.268 [2024-11-06 15:35:44.663715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c6000 len:0x1000 key:0x180800 00:31:17.268 [2024-11-06 15:35:44.663732] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.268 [2024-11-06 15:35:44.663741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:17.268 [2024-11-06 15:35:44.663769] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf578 length 0x10 lkey 0x180800 00:31:17.268 [2024-11-06 15:35:44.663781] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.268 [2024-11-06 15:35:44.663792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:17.268 [2024-11-06 15:35:44.663806] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5a0 length 0x10 lkey 0x180800 00:31:17.268 [2024-11-06 15:35:44.663818] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.268 [2024-11-06 15:35:44.663826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:17.268 [2024-11-06 15:35:44.663838] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5c8 length 0x10 lkey 0x180800 00:31:17.268 [2024-11-06 15:35:44.663847] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.268 [2024-11-06 15:35:44.663856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:17.268 [2024-11-06 15:35:44.663873] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5f0 length 0x10 lkey 0x180800 00:31:17.268 ===================================================== 00:31:17.268 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:17.268 ===================================================== 00:31:17.268 Controller Capabilities/Features 00:31:17.268 ================================ 00:31:17.268 Vendor ID: 8086 00:31:17.268 Subsystem Vendor ID: 8086 00:31:17.268 Serial Number: SPDK00000000000001 00:31:17.268 Model Number: SPDK bdev Controller 00:31:17.268 Firmware Version: 25.01 00:31:17.268 Recommended Arb Burst: 6 00:31:17.268 IEEE OUI Identifier: e4 d2 5c 00:31:17.268 Multi-path I/O 00:31:17.268 May have multiple subsystem ports: Yes 00:31:17.268 May have multiple controllers: Yes 00:31:17.268 Associated with SR-IOV VF: No 00:31:17.268 Max Data Transfer Size: 131072 00:31:17.268 Max Number of Namespaces: 32 00:31:17.268 Max Number of I/O Queues: 127 00:31:17.268 NVMe Specification Version (VS): 1.3 00:31:17.268 NVMe Specification Version (Identify): 1.3 00:31:17.268 Maximum Queue Entries: 128 00:31:17.268 Contiguous Queues Required: Yes 00:31:17.268 Arbitration Mechanisms Supported 00:31:17.268 Weighted Round Robin: Not Supported 00:31:17.268 Vendor Specific: Not Supported 00:31:17.268 Reset Timeout: 15000 ms 00:31:17.269 Doorbell Stride: 4 bytes 00:31:17.269 NVM Subsystem Reset: Not Supported 00:31:17.269 Command Sets Supported 00:31:17.269 NVM Command Set: Supported 00:31:17.269 Boot Partition: Not Supported 00:31:17.269 Memory Page Size Minimum: 4096 bytes 00:31:17.269 Memory Page Size Maximum: 4096 bytes 00:31:17.269 Persistent Memory Region: Not Supported 00:31:17.269 Optional Asynchronous Events Supported 00:31:17.269 Namespace Attribute Notices: Supported 00:31:17.269 Firmware Activation Notices: Not Supported 00:31:17.269 ANA Change Notices: Not Supported 00:31:17.269 PLE Aggregate Log Change Notices: Not Supported 00:31:17.269 LBA Status Info Alert Notices: Not Supported 00:31:17.269 EGE Aggregate Log Change Notices: Not Supported 00:31:17.269 Normal NVM Subsystem Shutdown event: Not Supported 00:31:17.269 Zone Descriptor Change Notices: Not Supported 00:31:17.269 Discovery Log Change Notices: Not Supported 00:31:17.269 Controller Attributes 00:31:17.269 128-bit Host Identifier: Supported 00:31:17.269 Non-Operational Permissive Mode: Not Supported 00:31:17.269 NVM Sets: Not Supported 00:31:17.269 Read Recovery Levels: Not Supported 00:31:17.269 Endurance Groups: Not Supported 00:31:17.269 Predictable Latency Mode: Not Supported 00:31:17.269 Traffic Based Keep ALive: Not Supported 00:31:17.269 Namespace Granularity: Not Supported 00:31:17.269 SQ Associations: Not Supported 00:31:17.269 UUID List: Not Supported 00:31:17.269 Multi-Domain Subsystem: Not 
Supported 00:31:17.269 Fixed Capacity Management: Not Supported 00:31:17.269 Variable Capacity Management: Not Supported 00:31:17.269 Delete Endurance Group: Not Supported 00:31:17.269 Delete NVM Set: Not Supported 00:31:17.269 Extended LBA Formats Supported: Not Supported 00:31:17.269 Flexible Data Placement Supported: Not Supported 00:31:17.269 00:31:17.269 Controller Memory Buffer Support 00:31:17.269 ================================ 00:31:17.269 Supported: No 00:31:17.269 00:31:17.269 Persistent Memory Region Support 00:31:17.269 ================================ 00:31:17.269 Supported: No 00:31:17.269 00:31:17.269 Admin Command Set Attributes 00:31:17.269 ============================ 00:31:17.269 Security Send/Receive: Not Supported 00:31:17.269 Format NVM: Not Supported 00:31:17.269 Firmware Activate/Download: Not Supported 00:31:17.269 Namespace Management: Not Supported 00:31:17.269 Device Self-Test: Not Supported 00:31:17.269 Directives: Not Supported 00:31:17.269 NVMe-MI: Not Supported 00:31:17.269 Virtualization Management: Not Supported 00:31:17.269 Doorbell Buffer Config: Not Supported 00:31:17.269 Get LBA Status Capability: Not Supported 00:31:17.269 Command & Feature Lockdown Capability: Not Supported 00:31:17.269 Abort Command Limit: 4 00:31:17.269 Async Event Request Limit: 4 00:31:17.269 Number of Firmware Slots: N/A 00:31:17.269 Firmware Slot 1 Read-Only: N/A 00:31:17.269 Firmware Activation Without Reset: N/A 00:31:17.269 Multiple Update Detection Support: N/A 00:31:17.269 Firmware Update Granularity: No Information Provided 00:31:17.269 Per-Namespace SMART Log: No 00:31:17.269 Asymmetric Namespace Access Log Page: Not Supported 00:31:17.269 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:31:17.269 Command Effects Log Page: Supported 00:31:17.269 Get Log Page Extended Data: Supported 00:31:17.269 Telemetry Log Pages: Not Supported 00:31:17.269 Persistent Event Log Pages: Not Supported 00:31:17.269 Supported Log Pages Log Page: May Support 00:31:17.269 Commands Supported & Effects Log Page: Not Supported 00:31:17.269 Feature Identifiers & Effects Log Page:May Support 00:31:17.269 NVMe-MI Commands & Effects Log Page: May Support 00:31:17.269 Data Area 4 for Telemetry Log: Not Supported 00:31:17.269 Error Log Page Entries Supported: 128 00:31:17.269 Keep Alive: Supported 00:31:17.269 Keep Alive Granularity: 10000 ms 00:31:17.269 00:31:17.269 NVM Command Set Attributes 00:31:17.269 ========================== 00:31:17.269 Submission Queue Entry Size 00:31:17.269 Max: 64 00:31:17.269 Min: 64 00:31:17.269 Completion Queue Entry Size 00:31:17.269 Max: 16 00:31:17.269 Min: 16 00:31:17.269 Number of Namespaces: 32 00:31:17.269 Compare Command: Supported 00:31:17.269 Write Uncorrectable Command: Not Supported 00:31:17.269 Dataset Management Command: Supported 00:31:17.269 Write Zeroes Command: Supported 00:31:17.269 Set Features Save Field: Not Supported 00:31:17.269 Reservations: Supported 00:31:17.269 Timestamp: Not Supported 00:31:17.269 Copy: Supported 00:31:17.269 Volatile Write Cache: Present 00:31:17.269 Atomic Write Unit (Normal): 1 00:31:17.269 Atomic Write Unit (PFail): 1 00:31:17.269 Atomic Compare & Write Unit: 1 00:31:17.269 Fused Compare & Write: Supported 00:31:17.269 Scatter-Gather List 00:31:17.269 SGL Command Set: Supported 00:31:17.269 SGL Keyed: Supported 00:31:17.269 SGL Bit Bucket Descriptor: Not Supported 00:31:17.269 SGL Metadata Pointer: Not Supported 00:31:17.269 Oversized SGL: Not Supported 00:31:17.269 SGL Metadata Address: Not Supported 00:31:17.269 SGL 
Offset: Supported 00:31:17.269 Transport SGL Data Block: Not Supported 00:31:17.269 Replay Protected Memory Block: Not Supported 00:31:17.269 00:31:17.269 Firmware Slot Information 00:31:17.269 ========================= 00:31:17.269 Active slot: 1 00:31:17.269 Slot 1 Firmware Revision: 25.01 00:31:17.269 00:31:17.269 00:31:17.269 Commands Supported and Effects 00:31:17.269 ============================== 00:31:17.269 Admin Commands 00:31:17.269 -------------- 00:31:17.269 Get Log Page (02h): Supported 00:31:17.269 Identify (06h): Supported 00:31:17.269 Abort (08h): Supported 00:31:17.269 Set Features (09h): Supported 00:31:17.269 Get Features (0Ah): Supported 00:31:17.269 Asynchronous Event Request (0Ch): Supported 00:31:17.269 Keep Alive (18h): Supported 00:31:17.269 I/O Commands 00:31:17.269 ------------ 00:31:17.269 Flush (00h): Supported LBA-Change 00:31:17.269 Write (01h): Supported LBA-Change 00:31:17.269 Read (02h): Supported 00:31:17.269 Compare (05h): Supported 00:31:17.269 Write Zeroes (08h): Supported LBA-Change 00:31:17.269 Dataset Management (09h): Supported LBA-Change 00:31:17.269 Copy (19h): Supported LBA-Change 00:31:17.269 00:31:17.269 Error Log 00:31:17.269 ========= 00:31:17.269 00:31:17.269 Arbitration 00:31:17.269 =========== 00:31:17.269 Arbitration Burst: 1 00:31:17.269 00:31:17.269 Power Management 00:31:17.269 ================ 00:31:17.269 Number of Power States: 1 00:31:17.269 Current Power State: Power State #0 00:31:17.269 Power State #0: 00:31:17.269 Max Power: 0.00 W 00:31:17.269 Non-Operational State: Operational 00:31:17.269 Entry Latency: Not Reported 00:31:17.269 Exit Latency: Not Reported 00:31:17.269 Relative Read Throughput: 0 00:31:17.269 Relative Read Latency: 0 00:31:17.269 Relative Write Throughput: 0 00:31:17.269 Relative Write Latency: 0 00:31:17.269 Idle Power: Not Reported 00:31:17.269 Active Power: Not Reported 00:31:17.269 Non-Operational Permissive Mode: Not Supported 00:31:17.269 00:31:17.269 Health Information 00:31:17.269 ================== 00:31:17.269 Critical Warnings: 00:31:17.269 Available Spare Space: OK 00:31:17.269 Temperature: OK 00:31:17.269 Device Reliability: OK 00:31:17.269 Read Only: No 00:31:17.269 Volatile Memory Backup: OK 00:31:17.269 Current Temperature: 0 Kelvin (-273 Celsius) 00:31:17.270 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:31:17.270 Available Spare: 0% 00:31:17.270 Available Spare Threshold: 0% 00:31:17.270 Life Percentage [2024-11-06 15:35:44.664016] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.664031] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.270 [2024-11-06 15:35:44.664061] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.270 [2024-11-06 15:35:44.664071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:17.270 [2024-11-06 15:35:44.664083] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf618 length 0x10 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.664137] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:31:17.270 [2024-11-06 15:35:44.664161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.270 
[2024-11-06 15:35:44.664172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.270 [2024-11-06 15:35:44.664184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.270 [2024-11-06 15:35:44.664194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.270 [2024-11-06 15:35:44.664209] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.664222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.270 [2024-11-06 15:35:44.664248] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.270 [2024-11-06 15:35:44.664258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:31:17.270 [2024-11-06 15:35:44.664272] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.664284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.270 [2024-11-06 15:35:44.664296] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.664320] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.270 [2024-11-06 15:35:44.664330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:17.270 [2024-11-06 15:35:44.664340] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:31:17.270 [2024-11-06 15:35:44.664351] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:31:17.270 [2024-11-06 15:35:44.664360] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.664383] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.664400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.270 [2024-11-06 15:35:44.664417] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.270 [2024-11-06 15:35:44.664427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:31:17.270 [2024-11-06 15:35:44.664440] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.664453] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.664466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.270 [2024-11-06 15:35:44.664490] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
00:31:17.270 [2024-11-06 15:35:44.664501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:31:17.270 [2024-11-06 15:35:44.664510] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.664524] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.664535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.270 [2024-11-06 15:35:44.664559] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.270 [2024-11-06 15:35:44.664567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:31:17.270 [2024-11-06 15:35:44.664578] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.664591] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.664606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.270 [2024-11-06 15:35:44.664645] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.270 [2024-11-06 15:35:44.664658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:31:17.270 [2024-11-06 15:35:44.664667] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.664682] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.664693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.270 [2024-11-06 15:35:44.664715] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.270 [2024-11-06 15:35:44.664724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:31:17.270 [2024-11-06 15:35:44.664735] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.664747] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.664760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.270 [2024-11-06 15:35:44.664779] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.270 [2024-11-06 15:35:44.664790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:31:17.270 [2024-11-06 15:35:44.664801] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.664819] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.664830] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.270 [2024-11-06 15:35:44.664858] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.270 [2024-11-06 15:35:44.664867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:31:17.270 [2024-11-06 15:35:44.664878] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.664890] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.664903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.270 [2024-11-06 15:35:44.664922] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.270 [2024-11-06 15:35:44.664933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:31:17.270 [2024-11-06 15:35:44.664942] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.664956] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.664969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.270 [2024-11-06 15:35:44.664995] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.270 [2024-11-06 15:35:44.665004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:31:17.270 [2024-11-06 15:35:44.665018] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.665031] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.665044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.270 [2024-11-06 15:35:44.665061] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.270 [2024-11-06 15:35:44.665074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:31:17.270 [2024-11-06 15:35:44.665083] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.665097] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.665108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.270 [2024-11-06 15:35:44.669140] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.270 [2024-11-06 15:35:44.669165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:31:17.270 [2024-11-06 15:35:44.669179] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 
0x2000003cf348 length 0x10 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.669195] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.669212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:17.270 [2024-11-06 15:35:44.669248] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:17.270 [2024-11-06 15:35:44.669259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0006 p:0 m:0 dnr:0 00:31:17.270 [2024-11-06 15:35:44.669271] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x180800 00:31:17.270 [2024-11-06 15:35:44.669285] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:31:17.270 Used: 0% 00:31:17.270 Data Units Read: 0 00:31:17.270 Data Units Written: 0 00:31:17.270 Host Read Commands: 0 00:31:17.270 Host Write Commands: 0 00:31:17.270 Controller Busy Time: 0 minutes 00:31:17.270 Power Cycles: 0 00:31:17.271 Power On Hours: 0 hours 00:31:17.271 Unsafe Shutdowns: 0 00:31:17.271 Unrecoverable Media Errors: 0 00:31:17.271 Lifetime Error Log Entries: 0 00:31:17.271 Warning Temperature Time: 0 minutes 00:31:17.271 Critical Temperature Time: 0 minutes 00:31:17.271 00:31:17.271 Number of Queues 00:31:17.271 ================ 00:31:17.271 Number of I/O Submission Queues: 127 00:31:17.271 Number of I/O Completion Queues: 127 00:31:17.271 00:31:17.271 Active Namespaces 00:31:17.271 ================= 00:31:17.271 Namespace ID:1 00:31:17.271 Error Recovery Timeout: Unlimited 00:31:17.271 Command Set Identifier: NVM (00h) 00:31:17.271 Deallocate: Supported 00:31:17.271 Deallocated/Unwritten Error: Not Supported 00:31:17.271 Deallocated Read Value: Unknown 00:31:17.271 Deallocate in Write Zeroes: Not Supported 00:31:17.271 Deallocated Guard Field: 0xFFFF 00:31:17.271 Flush: Supported 00:31:17.271 Reservation: Supported 00:31:17.271 Namespace Sharing Capabilities: Multiple Controllers 00:31:17.271 Size (in LBAs): 131072 (0GiB) 00:31:17.271 Capacity (in LBAs): 131072 (0GiB) 00:31:17.271 Utilization (in LBAs): 131072 (0GiB) 00:31:17.271 NGUID: ABCDEF0123456789ABCDEF0123456789 00:31:17.271 EUI64: ABCDEF0123456789 00:31:17.271 UUID: 5338d6f7-c3bf-4d09-98ca-81a5831218ed 00:31:17.271 Thin Provisioning: Not Supported 00:31:17.271 Per-NS Atomic Units: Yes 00:31:17.271 Atomic Boundary Size (Normal): 0 00:31:17.271 Atomic Boundary Size (PFail): 0 00:31:17.271 Atomic Boundary Offset: 0 00:31:17.271 Maximum Single Source Range Length: 65535 00:31:17.271 Maximum Copy Length: 65535 00:31:17.271 Maximum Source Range Count: 1 00:31:17.271 NGUID/EUI64 Never Reused: No 00:31:17.271 Namespace Write Protected: No 00:31:17.271 Number of LBA Formats: 1 00:31:17.271 Current LBA Format: LBA Format #00 00:31:17.271 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:17.271 00:31:17.271 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:31:17.271 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:17.271 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.271 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:17.271 15:35:44 
nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.271 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:31:17.271 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:31:17.271 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:17.271 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:31:17.271 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:31:17.271 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:31:17.271 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:31:17.271 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:17.271 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:31:17.271 rmmod nvme_rdma 00:31:17.271 rmmod nvme_fabrics 00:31:17.271 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:17.271 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:31:17.271 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:31:17.271 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3225740 ']' 00:31:17.271 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3225740 00:31:17.271 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 3225740 ']' 00:31:17.271 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 3225740 00:31:17.271 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:31:17.271 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:17.271 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3225740 00:31:17.530 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:17.530 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:17.530 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3225740' 00:31:17.530 killing process with pid 3225740 00:31:17.530 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 3225740 00:31:17.530 15:35:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 3225740 00:31:19.436 15:35:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:19.436 15:35:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:31:19.436 00:31:19.436 real 0m11.244s 00:31:19.436 user 0m14.959s 00:31:19.436 sys 0m6.145s 00:31:19.436 15:35:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:19.436 15:35:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:19.436 ************************************ 00:31:19.436 END TEST nvmf_identify 00:31:19.436 ************************************ 00:31:19.436 15:35:46 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh 
--transport=rdma 00:31:19.436 15:35:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:19.436 15:35:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:19.436 15:35:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.436 ************************************ 00:31:19.436 START TEST nvmf_perf 00:31:19.436 ************************************ 00:31:19.436 15:35:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:31:19.436 * Looking for test storage... 00:31:19.436 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:31:19.436 15:35:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:19.436 15:35:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:31:19.436 15:35:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:19.436 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:19.436 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:19.436 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:19.436 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:19.436 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:31:19.436 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:31:19.695 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:31:19.695 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:31:19.695 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:31:19.695 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:31:19.695 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:31:19.695 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:19.695 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:31:19.695 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:31:19.695 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:19.695 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:19.695 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:31:19.695 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:31:19.695 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:19.695 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:31:19.695 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:19.695 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:31:19.695 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:31:19.695 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:19.695 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:31:19.695 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:19.695 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:19.695 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:19.695 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:31:19.695 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:19.695 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:19.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.695 --rc genhtml_branch_coverage=1 00:31:19.695 --rc genhtml_function_coverage=1 00:31:19.695 --rc genhtml_legend=1 00:31:19.695 --rc geninfo_all_blocks=1 00:31:19.695 --rc geninfo_unexecuted_blocks=1 00:31:19.695 00:31:19.696 ' 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:19.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.696 --rc genhtml_branch_coverage=1 00:31:19.696 --rc genhtml_function_coverage=1 00:31:19.696 --rc genhtml_legend=1 00:31:19.696 --rc geninfo_all_blocks=1 00:31:19.696 --rc geninfo_unexecuted_blocks=1 00:31:19.696 00:31:19.696 ' 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:19.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.696 --rc genhtml_branch_coverage=1 00:31:19.696 --rc genhtml_function_coverage=1 00:31:19.696 --rc genhtml_legend=1 00:31:19.696 --rc geninfo_all_blocks=1 00:31:19.696 --rc geninfo_unexecuted_blocks=1 00:31:19.696 00:31:19.696 ' 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:19.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.696 --rc genhtml_branch_coverage=1 00:31:19.696 --rc genhtml_function_coverage=1 00:31:19.696 --rc genhtml_legend=1 00:31:19.696 --rc geninfo_all_blocks=1 00:31:19.696 --rc geninfo_unexecuted_blocks=1 00:31:19.696 00:31:19.696 ' 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:19.696 15:35:47 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:19.696 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.696 15:35:47 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:31:19.696 15:35:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:26.266 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:26.266 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:26.266 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:26.266 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:26.266 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:26.266 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:26.266 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:26.266 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:31:26.266 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:26.267 15:35:53 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:31:26.267 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:31:26.267 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:31:26.267 Found net devices under 0000:18:00.0: mlx_0_0 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:31:26.267 Found net devices under 0000:18:00.1: mlx_0_1 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # rdma_device_init 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:26.267 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:26.268 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:26.268 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:31:26.268 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:31:26.268 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:31:26.268 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:26.268 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:31:26.268 altname enp24s0f0np0 00:31:26.268 altname ens785f0np0 00:31:26.268 inet 192.168.100.8/24 scope global mlx_0_0 00:31:26.268 valid_lft forever preferred_lft forever 00:31:26.268 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:26.268 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:31:26.268 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:26.268 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:26.268 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:31:26.528 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:26.528 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:31:26.528 altname enp24s0f1np1 00:31:26.528 altname ens785f1np1 00:31:26.528 inet 192.168.100.9/24 scope global mlx_0_1 00:31:26.528 valid_lft forever preferred_lft forever 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- 
# '[' '' == iso ']' 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # 
RDMA_IP_LIST='192.168.100.8 00:31:26.528 192.168.100.9' 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:31:26.528 192.168.100.9' 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # head -n 1 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:31:26.528 192.168.100.9' 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # tail -n +2 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # head -n 1 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:26.528 15:35:53 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:31:26.528 15:35:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:31:26.528 15:35:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:31:26.528 15:35:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:31:26.528 15:35:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:26.528 15:35:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:26.528 15:35:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:26.528 15:35:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3229208 00:31:26.528 15:35:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:26.528 15:35:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3229208 00:31:26.528 15:35:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 3229208 ']' 00:31:26.528 15:35:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.528 15:35:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:26.528 15:35:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:26.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:26.528 15:35:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:26.528 15:35:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:26.528 [2024-11-06 15:35:54.132907] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:31:26.528 [2024-11-06 15:35:54.133034] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:26.788 [2024-11-06 15:35:54.281598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:26.788 [2024-11-06 15:35:54.392068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:26.788 [2024-11-06 15:35:54.392145] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:26.788 [2024-11-06 15:35:54.392159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:26.788 [2024-11-06 15:35:54.392175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:26.788 [2024-11-06 15:35:54.392185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:26.788 [2024-11-06 15:35:54.394462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:26.788 [2024-11-06 15:35:54.394560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:26.788 [2024-11-06 15:35:54.394615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.788 [2024-11-06 15:35:54.394644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:27.358 15:35:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:27.358 15:35:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:31:27.358 15:35:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:27.358 15:35:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:27.358 15:35:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:27.358 15:35:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:27.617 15:35:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:27.617 15:35:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:30.905 15:35:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:31:30.905 15:35:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:31:30.905 15:35:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5f:00.0 00:31:30.905 15:35:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:31.164 15:35:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:31:31.164 15:35:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5f:00.0 ']' 00:31:31.164 15:35:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:31:31.164 15:35:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:31:31.164 15:35:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:31:31.164 [2024-11-06 15:35:58.738119] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:31:31.164 [2024-11-06 15:35:58.762609] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x61200002a1c0/0x7f6b0a348940) succeed. 00:31:31.164 [2024-11-06 15:35:58.772560] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x61200002a340/0x7f6b0a304940) succeed. 00:31:31.423 15:35:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:31.682 15:35:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:31.682 15:35:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:31.941 15:35:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:31.941 15:35:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:32.199 15:35:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:32.199 [2024-11-06 15:35:59.769454] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:32.199 15:35:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:31:32.457 15:35:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5f:00.0 ']' 00:31:32.457 15:35:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:31:32.457 15:35:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:31:32.457 15:35:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:31:33.834 Initializing NVMe Controllers 00:31:33.834 Attached to NVMe Controller at 0000:5f:00.0 [8086:0a54] 00:31:33.834 Associating PCIE (0000:5f:00.0) NSID 1 with lcore 0 00:31:33.834 Initialization complete. Launching workers. 
00:31:33.834 ======================================================== 00:31:33.834 Latency(us) 00:31:33.834 Device Information : IOPS MiB/s Average min max 00:31:33.834 PCIE (0000:5f:00.0) NSID 1 from core 0: 89598.89 350.00 356.38 10.42 7215.27 00:31:33.834 ======================================================== 00:31:33.835 Total : 89598.89 350.00 356.38 10.42 7215.27 00:31:33.835 00:31:34.093 15:36:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:37.382 Initializing NVMe Controllers 00:31:37.382 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:37.382 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:37.382 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:37.382 Initialization complete. Launching workers. 00:31:37.382 ======================================================== 00:31:37.382 Latency(us) 00:31:37.382 Device Information : IOPS MiB/s Average min max 00:31:37.382 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6041.99 23.60 165.25 52.99 7998.43 00:31:37.382 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4643.99 18.14 215.06 84.36 8044.05 00:31:37.382 ======================================================== 00:31:37.382 Total : 10685.98 41.74 186.90 52.99 8044.05 00:31:37.382 00:31:37.640 15:36:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:40.928 Initializing NVMe Controllers 00:31:40.928 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:40.928 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:40.928 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:40.928 Initialization complete. Launching workers. 00:31:40.928 ======================================================== 00:31:40.928 Latency(us) 00:31:40.928 Device Information : IOPS MiB/s Average min max 00:31:40.928 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15833.41 61.85 2026.72 562.37 6123.44 00:31:40.928 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4031.85 15.75 7970.13 6456.85 8314.98 00:31:40.928 ======================================================== 00:31:40.928 Total : 19865.26 77.60 3233.00 562.37 8314.98 00:31:40.928 00:31:41.187 15:36:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:31:41.187 15:36:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:46.460 Initializing NVMe Controllers 00:31:46.460 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:46.460 Controller IO queue size 128, less than required. 00:31:46.460 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:46.460 Controller IO queue size 128, less than required. 00:31:46.461 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:46.461 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:46.461 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:46.461 Initialization complete. Launching workers. 00:31:46.461 ======================================================== 00:31:46.461 Latency(us) 00:31:46.461 Device Information : IOPS MiB/s Average min max 00:31:46.461 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3154.00 788.50 42813.08 20687.42 372842.34 00:31:46.461 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3295.50 823.88 38065.66 20463.18 379265.40 00:31:46.461 ======================================================== 00:31:46.461 Total : 6449.50 1612.38 40387.29 20463.18 379265.40 00:31:46.461 00:31:46.461 15:36:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:31:46.461 No valid NVMe controllers or AIO or URING devices found 00:31:46.461 Initializing NVMe Controllers 00:31:46.461 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:46.461 Controller IO queue size 128, less than required. 00:31:46.461 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:46.461 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:46.461 Controller IO queue size 128, less than required. 00:31:46.461 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:46.461 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:31:46.461 WARNING: Some requested NVMe devices were skipped 00:31:46.461 15:36:14 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:31:51.737 Initializing NVMe Controllers 00:31:51.737 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:51.737 Controller IO queue size 128, less than required. 00:31:51.737 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:51.737 Controller IO queue size 128, less than required. 00:31:51.737 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:51.737 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:51.737 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:51.737 Initialization complete. Launching workers. 
00:31:51.737 00:31:51.737 ==================== 00:31:51.737 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:51.737 RDMA transport: 00:31:51.737 dev name: mlx5_0 00:31:51.737 polls: 294836 00:31:51.737 idle_polls: 292674 00:31:51.737 completions: 33658 00:31:51.737 queued_requests: 1 00:31:51.737 total_send_wrs: 16829 00:31:51.737 send_doorbell_updates: 1943 00:31:51.737 total_recv_wrs: 16956 00:31:51.737 recv_doorbell_updates: 1945 00:31:51.737 --------------------------------- 00:31:51.737 00:31:51.737 ==================== 00:31:51.737 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:51.737 RDMA transport: 00:31:51.737 dev name: mlx5_0 00:31:51.737 polls: 294225 00:31:51.737 idle_polls: 293974 00:31:51.737 completions: 16130 00:31:51.737 queued_requests: 1 00:31:51.737 total_send_wrs: 8065 00:31:51.737 send_doorbell_updates: 239 00:31:51.737 total_recv_wrs: 8192 00:31:51.737 recv_doorbell_updates: 240 00:31:51.737 --------------------------------- 00:31:51.737 ======================================================== 00:31:51.737 Latency(us) 00:31:51.737 Device Information : IOPS MiB/s Average min max 00:31:51.737 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4204.65 1051.16 30869.68 14884.54 384949.24 00:31:51.737 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2014.87 503.72 65111.97 36962.67 395458.70 00:31:51.737 ======================================================== 00:31:51.737 Total : 6219.52 1554.88 41962.80 14884.54 395458.70 00:31:51.737 00:31:51.737 15:36:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:51.737 15:36:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:51.737 15:36:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:51.737 15:36:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5f:00.0 ']' 00:31:51.737 15:36:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:32:59.434 15:37:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=06d0756b-c263-47e9-aae5-a00e87d6a137 00:32:59.434 15:37:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 06d0756b-c263-47e9-aae5-a00e87d6a137 00:32:59.434 15:37:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local lvs_uuid=06d0756b-c263-47e9-aae5-a00e87d6a137 00:32:59.434 15:37:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local lvs_info 00:32:59.434 15:37:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local fc 00:32:59.434 15:37:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local cs 00:32:59.434 15:37:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:59.434 15:37:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:32:59.434 { 00:32:59.434 "uuid": "06d0756b-c263-47e9-aae5-a00e87d6a137", 00:32:59.434 "name": "lvs_0", 00:32:59.434 "base_bdev": "Nvme0n1", 00:32:59.434 "total_data_clusters": 1905857, 00:32:59.434 "free_clusters": 1905857, 00:32:59.434 "block_size": 512, 00:32:59.434 "cluster_size": 4194304 
00:32:59.434 } 00:32:59.434 ]' 00:32:59.434 15:37:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="06d0756b-c263-47e9-aae5-a00e87d6a137") .free_clusters' 00:32:59.434 15:37:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # fc=1905857 00:32:59.434 15:37:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="06d0756b-c263-47e9-aae5-a00e87d6a137") .cluster_size' 00:32:59.434 15:37:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # cs=4194304 00:32:59.434 15:37:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1375 -- # free_mb=7623428 00:32:59.434 15:37:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1376 -- # echo 7623428 00:32:59.434 7623428 00:32:59.434 15:37:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 7623428 -gt 20480 ']' 00:32:59.434 15:37:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:32:59.434 15:37:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 06d0756b-c263-47e9-aae5-a00e87d6a137 lbd_0 20480 00:32:59.434 15:37:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=99b3487e-1242-46fc-8e45-122558b92687 00:32:59.434 15:37:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 99b3487e-1242-46fc-8e45-122558b92687 lvs_n_0 00:32:59.434 15:37:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=968d8e3c-840d-4987-8ba5-3a1dc12c9986 00:32:59.434 15:37:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 968d8e3c-840d-4987-8ba5-3a1dc12c9986 00:32:59.434 15:37:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local lvs_uuid=968d8e3c-840d-4987-8ba5-3a1dc12c9986 00:32:59.434 15:37:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local lvs_info 00:32:59.434 15:37:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local fc 00:32:59.434 15:37:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local cs 00:32:59.434 15:37:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:59.434 15:37:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:32:59.434 { 00:32:59.434 "uuid": "06d0756b-c263-47e9-aae5-a00e87d6a137", 00:32:59.434 "name": "lvs_0", 00:32:59.434 "base_bdev": "Nvme0n1", 00:32:59.434 "total_data_clusters": 1905857, 00:32:59.434 "free_clusters": 1900737, 00:32:59.434 "block_size": 512, 00:32:59.434 "cluster_size": 4194304 00:32:59.434 }, 00:32:59.434 { 00:32:59.434 "uuid": "968d8e3c-840d-4987-8ba5-3a1dc12c9986", 00:32:59.434 "name": "lvs_n_0", 00:32:59.434 "base_bdev": "99b3487e-1242-46fc-8e45-122558b92687", 00:32:59.434 "total_data_clusters": 5114, 00:32:59.434 "free_clusters": 5114, 00:32:59.434 "block_size": 512, 00:32:59.434 "cluster_size": 4194304 00:32:59.434 } 00:32:59.434 ]' 00:32:59.434 15:37:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="968d8e3c-840d-4987-8ba5-3a1dc12c9986") .free_clusters' 00:32:59.434 15:37:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # fc=5114 00:32:59.434 15:37:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # jq '.[] | 
select(.uuid=="968d8e3c-840d-4987-8ba5-3a1dc12c9986") .cluster_size' 00:32:59.434 15:37:24 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # cs=4194304 00:32:59.434 15:37:24 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1375 -- # free_mb=20456 00:32:59.434 15:37:24 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1376 -- # echo 20456 00:32:59.434 20456 00:32:59.434 15:37:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:32:59.434 15:37:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 968d8e3c-840d-4987-8ba5-3a1dc12c9986 lbd_nest_0 20456 00:32:59.434 15:37:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=b6ae770e-2ff4-4177-8727-a80874cb9df8 00:32:59.434 15:37:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:59.434 15:37:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:32:59.434 15:37:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 b6ae770e-2ff4-4177-8727-a80874cb9df8 00:32:59.434 15:37:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:59.434 15:37:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:32:59.434 15:37:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:32:59.434 15:37:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:59.434 15:37:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:59.434 15:37:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:33:09.491 Initializing NVMe Controllers 00:33:09.491 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:33:09.491 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:09.491 Initialization complete. Launching workers. 
00:33:09.491 ======================================================== 00:33:09.491 Latency(us) 00:33:09.491 Device Information : IOPS MiB/s Average min max 00:33:09.491 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5040.78 2.46 198.06 79.67 7256.52 00:33:09.491 ======================================================== 00:33:09.491 Total : 5040.78 2.46 198.06 79.67 7256.52 00:33:09.491 00:33:09.491 15:37:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:09.491 15:37:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:33:21.783 Initializing NVMe Controllers 00:33:21.783 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:33:21.783 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:21.783 Initialization complete. Launching workers. 00:33:21.783 ======================================================== 00:33:21.783 Latency(us) 00:33:21.783 Device Information : IOPS MiB/s Average min max 00:33:21.783 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2407.03 300.88 415.03 174.26 7088.48 00:33:21.783 ======================================================== 00:33:21.783 Total : 2407.03 300.88 415.03 174.26 7088.48 00:33:21.783 00:33:21.783 15:37:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:33:21.783 15:37:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:21.783 15:37:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:33:31.767 Initializing NVMe Controllers 00:33:31.767 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:33:31.767 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:31.767 Initialization complete. Launching workers. 00:33:31.767 ======================================================== 00:33:31.767 Latency(us) 00:33:31.767 Device Information : IOPS MiB/s Average min max 00:33:31.767 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9985.32 4.88 3204.90 1234.16 7236.22 00:33:31.767 ======================================================== 00:33:31.767 Total : 9985.32 4.88 3204.90 1234.16 7236.22 00:33:31.767 00:33:32.027 15:37:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:32.027 15:37:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:33:44.238 Initializing NVMe Controllers 00:33:44.238 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:33:44.238 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:44.238 Initialization complete. Launching workers. 
00:33:44.238 ======================================================== 00:33:44.238 Latency(us) 00:33:44.238 Device Information : IOPS MiB/s Average min max 00:33:44.238 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3960.00 495.00 8084.45 4909.59 23354.43 00:33:44.238 ======================================================== 00:33:44.238 Total : 3960.00 495.00 8084.45 4909.59 23354.43 00:33:44.238 00:33:44.238 15:38:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:33:44.238 15:38:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:44.238 15:38:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:33:56.451 Initializing NVMe Controllers 00:33:56.451 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:33:56.451 Controller IO queue size 128, less than required. 00:33:56.451 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:56.451 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:56.451 Initialization complete. Launching workers. 00:33:56.451 ======================================================== 00:33:56.451 Latency(us) 00:33:56.451 Device Information : IOPS MiB/s Average min max 00:33:56.451 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16106.70 7.86 7950.32 2515.38 16401.20 00:33:56.451 ======================================================== 00:33:56.451 Total : 16106.70 7.86 7950.32 2515.38 16401.20 00:33:56.451 00:33:56.451 15:38:22 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:56.451 15:38:22 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:34:06.434 Initializing NVMe Controllers 00:34:06.434 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:34:06.434 Controller IO queue size 128, less than required. 00:34:06.434 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:06.434 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:06.434 Initialization complete. Launching workers. 
00:34:06.434 ======================================================== 00:34:06.434 Latency(us) 00:34:06.434 Device Information : IOPS MiB/s Average min max 00:34:06.434 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9263.80 1157.97 13815.73 3928.80 85434.58 00:34:06.434 ======================================================== 00:34:06.434 Total : 9263.80 1157.97 13815.73 3928.80 85434.58 00:34:06.434 00:34:06.693 15:38:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:06.952 15:38:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b6ae770e-2ff4-4177-8727-a80874cb9df8 00:34:07.889 15:38:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:34:07.889 15:38:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 99b3487e-1242-46fc-8e45-122558b92687 00:34:08.458 15:38:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:34:08.458 15:38:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:34:08.458 15:38:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:34:08.458 15:38:36 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:08.458 15:38:36 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:34:08.458 15:38:36 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:34:08.458 15:38:36 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:34:08.458 15:38:36 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:34:08.458 15:38:36 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:08.458 15:38:36 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:34:08.458 rmmod nvme_rdma 00:34:08.458 rmmod nvme_fabrics 00:34:08.458 15:38:36 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:08.458 15:38:36 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:34:08.458 15:38:36 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:34:08.458 15:38:36 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3229208 ']' 00:34:08.458 15:38:36 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3229208 00:34:08.458 15:38:36 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 3229208 ']' 00:34:08.458 15:38:36 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 3229208 00:34:08.458 15:38:36 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:34:08.458 15:38:36 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:08.458 15:38:36 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3229208 00:34:08.717 15:38:36 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:08.717 15:38:36 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:08.717 15:38:36 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3229208' 00:34:08.717 killing process with pid 3229208 00:34:08.717 15:38:36 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 3229208 00:34:08.717 15:38:36 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 3229208 00:34:16.838 15:38:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:16.838 15:38:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:34:16.838 00:34:16.838 real 2m57.457s 00:34:16.838 user 11m21.038s 00:34:16.838 sys 0m8.749s 00:34:16.838 15:38:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:16.838 15:38:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:34:16.838 ************************************ 00:34:16.838 END TEST nvmf_perf 00:34:16.838 ************************************ 00:34:16.838 15:38:44 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:34:16.838 15:38:44 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:34:16.838 15:38:44 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:16.838 15:38:44 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.838 ************************************ 00:34:16.838 START TEST nvmf_fio_host 00:34:16.838 ************************************ 00:34:16.838 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:34:17.099 * Looking for test storage... 
00:34:17.099 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:17.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.099 --rc genhtml_branch_coverage=1 00:34:17.099 --rc genhtml_function_coverage=1 00:34:17.099 --rc genhtml_legend=1 00:34:17.099 --rc geninfo_all_blocks=1 00:34:17.099 --rc geninfo_unexecuted_blocks=1 00:34:17.099 00:34:17.099 ' 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:17.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.099 --rc genhtml_branch_coverage=1 00:34:17.099 --rc genhtml_function_coverage=1 00:34:17.099 --rc genhtml_legend=1 00:34:17.099 --rc geninfo_all_blocks=1 00:34:17.099 --rc geninfo_unexecuted_blocks=1 00:34:17.099 00:34:17.099 ' 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:17.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.099 --rc genhtml_branch_coverage=1 00:34:17.099 --rc genhtml_function_coverage=1 00:34:17.099 --rc genhtml_legend=1 00:34:17.099 --rc geninfo_all_blocks=1 00:34:17.099 --rc geninfo_unexecuted_blocks=1 00:34:17.099 00:34:17.099 ' 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:17.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.099 --rc genhtml_branch_coverage=1 00:34:17.099 --rc genhtml_function_coverage=1 00:34:17.099 --rc genhtml_legend=1 00:34:17.099 --rc geninfo_all_blocks=1 00:34:17.099 --rc geninfo_unexecuted_blocks=1 00:34:17.099 00:34:17.099 ' 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:17.099 15:38:44 
nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:17.099 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:17.100 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:34:17.100 
15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:17.100 15:38:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:34:25.228 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:34:25.228 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:34:25.228 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:34:25.229 Found net devices under 0000:18:00.0: mlx_0_0 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:34:25.229 Found net devices under 0000:18:00.1: mlx_0_1 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # rdma_device_init 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:34:25.229 
15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:34:25.229 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:25.229 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:34:25.229 altname enp24s0f0np0 00:34:25.229 altname ens785f0np0 00:34:25.229 inet 192.168.100.8/24 scope global mlx_0_0 00:34:25.229 valid_lft forever preferred_lft forever 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:34:25.229 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:25.229 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:34:25.229 altname enp24s0f1np1 00:34:25.229 altname ens785f1np1 00:34:25.229 inet 192.168.100.9/24 scope global mlx_0_1 00:34:25.229 valid_lft forever preferred_lft forever 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:34:25.229 15:38:51 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:25.229 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:34:25.229 192.168.100.9' 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:34:25.230 192.168.100.9' 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # head -n 1 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:34:25.230 192.168.100.9' 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # tail -n +2 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # head -n 1 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3254217 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3254217 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 3254217 ']' 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:25.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:25.230 15:38:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.230 [2024-11-06 15:38:51.687233] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:34:25.230 [2024-11-06 15:38:51.687360] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:25.230 [2024-11-06 15:38:51.841812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:25.230 [2024-11-06 15:38:51.950861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:25.230 [2024-11-06 15:38:51.950916] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:25.230 [2024-11-06 15:38:51.950929] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:25.230 [2024-11-06 15:38:51.950943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:25.230 [2024-11-06 15:38:51.950953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:25.230 [2024-11-06 15:38:51.953284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:25.230 [2024-11-06 15:38:51.953225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:25.230 [2024-11-06 15:38:51.953267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:25.230 [2024-11-06 15:38:51.953309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:25.230 15:38:52 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:25.230 15:38:52 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:34:25.230 15:38:52 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:34:25.230 [2024-11-06 15:38:52.680566] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f9fb4984940) succeed. 00:34:25.230 [2024-11-06 15:38:52.690153] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f9fb493e940) succeed. 
00:34:25.489 15:38:52 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:34:25.489 15:38:52 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:25.489 15:38:52 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.489 15:38:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:34:25.749 Malloc1 00:34:25.749 15:38:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:26.008 15:38:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:26.267 15:38:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:26.267 [2024-11-06 15:38:53.896669] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:26.527 15:38:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:34:26.527 15:38:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:34:26.527 15:38:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:34:26.527 15:38:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:34:26.527 15:38:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:34:26.527 15:38:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:26.527 15:38:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:34:26.527 15:38:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:34:26.527 15:38:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:34:26.527 15:38:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:34:26.527 15:38:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:34:26.527 15:38:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:34:26.527 15:38:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:34:26.527 15:38:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:34:26.527 15:38:54 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:26.527 15:38:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:26.527 15:38:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # break 00:34:26.527 15:38:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:26.527 15:38:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:34:27.093 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:27.093 fio-3.35 00:34:27.093 Starting 1 thread 00:34:29.623 00:34:29.623 test: (groupid=0, jobs=1): err= 0: pid=3254793: Wed Nov 6 15:38:56 2024 00:34:29.623 read: IOPS=14.7k, BW=57.4MiB/s (60.2MB/s)(115MiB/2004msec) 00:34:29.623 slat (nsec): min=1522, max=41347, avg=1672.52, stdev=484.48 00:34:29.623 clat (usec): min=2806, max=7941, avg=4318.69, stdev=112.54 00:34:29.623 lat (usec): min=2829, max=7943, avg=4320.36, stdev=112.48 00:34:29.623 clat percentiles (usec): 00:34:29.623 | 1.00th=[ 4293], 5.00th=[ 4293], 10.00th=[ 4293], 20.00th=[ 4293], 00:34:29.623 | 30.00th=[ 4293], 40.00th=[ 4293], 50.00th=[ 4293], 60.00th=[ 4293], 00:34:29.623 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4359], 95.00th=[ 4359], 00:34:29.623 | 99.00th=[ 4490], 99.50th=[ 4555], 99.90th=[ 6259], 99.95th=[ 7308], 00:34:29.623 | 99.99th=[ 7898] 00:34:29.623 bw ( KiB/s): min=57208, max=59776, per=99.98%, avg=58774.00, stdev=1125.50, samples=4 00:34:29.623 iops : min=14302, max=14944, avg=14693.50, stdev=281.37, samples=4 00:34:29.623 write: IOPS=14.7k, BW=57.5MiB/s (60.3MB/s)(115MiB/2004msec); 0 zone resets 00:34:29.623 slat (nsec): min=1556, max=18303, avg=2076.07, stdev=530.38 00:34:29.623 clat (usec): min=2833, max=7898, avg=4315.83, stdev=103.29 00:34:29.623 lat (usec): min=2849, max=7900, avg=4317.91, stdev=103.22 00:34:29.623 clat percentiles (usec): 00:34:29.623 | 1.00th=[ 4293], 5.00th=[ 4293], 10.00th=[ 4293], 20.00th=[ 4293], 00:34:29.623 | 30.00th=[ 4293], 40.00th=[ 4293], 50.00th=[ 4293], 60.00th=[ 4293], 00:34:29.623 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4359], 95.00th=[ 4359], 00:34:29.623 | 99.00th=[ 4490], 99.50th=[ 4555], 99.90th=[ 6194], 99.95th=[ 6849], 00:34:29.623 | 99.99th=[ 7439] 00:34:29.623 bw ( KiB/s): min=57536, max=59656, per=99.99%, avg=58858.00, stdev=925.20, samples=4 00:34:29.623 iops : min=14384, max=14914, avg=14714.50, stdev=231.30, samples=4 00:34:29.623 lat (msec) : 4=0.33%, 10=99.67% 00:34:29.623 cpu : usr=99.20%, sys=0.40%, ctx=15, majf=0, minf=1468 00:34:29.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:34:29.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:29.624 issued rwts: total=29451,29492,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:29.624 00:34:29.624 Run status group 0 (all jobs): 00:34:29.624 READ: bw=57.4MiB/s (60.2MB/s), 57.4MiB/s-57.4MiB/s (60.2MB/s-60.2MB/s), io=115MiB (121MB), run=2004-2004msec 00:34:29.624 WRITE: 
bw=57.5MiB/s (60.3MB/s), 57.5MiB/s-57.5MiB/s (60.3MB/s-60.3MB/s), io=115MiB (121MB), run=2004-2004msec 00:34:29.882 ----------------------------------------------------- 00:34:29.882 Suppressions used: 00:34:29.882 count bytes template 00:34:29.882 1 63 /usr/src/fio/parse.c 00:34:29.882 1 8 libtcmalloc_minimal.so 00:34:29.882 ----------------------------------------------------- 00:34:29.882 00:34:29.882 15:38:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:34:29.882 15:38:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:34:29.882 15:38:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:34:29.882 15:38:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:29.882 15:38:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:34:29.882 15:38:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:34:29.882 15:38:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:34:29.882 15:38:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:34:29.882 15:38:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:34:29.882 15:38:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:34:29.882 15:38:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:34:29.882 15:38:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:34:29.882 15:38:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:29.882 15:38:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:29.882 15:38:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # break 00:34:29.882 15:38:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:29.882 15:38:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:34:30.140 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:34:30.140 fio-3.35 00:34:30.140 Starting 1 thread 00:34:32.668 00:34:32.668 test: (groupid=0, jobs=1): err= 0: pid=3255240: Wed Nov 6 15:39:00 2024 00:34:32.668 read: IOPS=11.7k, BW=183MiB/s (192MB/s)(361MiB/1972msec) 00:34:32.668 slat (nsec): min=2548, max=58769, avg=3016.74, stdev=1316.09 00:34:32.668 clat (usec): min=660, max=9660, avg=1959.69, 
stdev=1618.38 00:34:32.668 lat (usec): min=663, max=9666, avg=1962.71, stdev=1618.88 00:34:32.668 clat percentiles (usec): 00:34:32.668 | 1.00th=[ 840], 5.00th=[ 955], 10.00th=[ 1029], 20.00th=[ 1139], 00:34:32.668 | 30.00th=[ 1221], 40.00th=[ 1303], 50.00th=[ 1418], 60.00th=[ 1549], 00:34:32.668 | 70.00th=[ 1680], 80.00th=[ 1876], 90.00th=[ 4490], 95.00th=[ 6259], 00:34:32.668 | 99.00th=[ 8094], 99.50th=[ 8717], 99.90th=[ 9241], 99.95th=[ 9372], 00:34:32.668 | 99.99th=[ 9634] 00:34:32.668 bw ( KiB/s): min=89504, max=93216, per=49.23%, avg=92272.00, stdev=1845.39, samples=4 00:34:32.668 iops : min= 5594, max= 5826, avg=5767.00, stdev=115.34, samples=4 00:34:32.668 write: IOPS=6672, BW=104MiB/s (109MB/s)(188MiB/1801msec); 0 zone resets 00:34:32.668 slat (nsec): min=27383, max=85608, avg=30608.38, stdev=3867.62 00:34:32.668 clat (usec): min=5366, max=24803, avg=15847.30, stdev=2282.66 00:34:32.668 lat (usec): min=5395, max=24835, avg=15877.91, stdev=2282.56 00:34:32.668 clat percentiles (usec): 00:34:32.668 | 1.00th=[ 9765], 5.00th=[12780], 10.00th=[13566], 20.00th=[14091], 00:34:32.668 | 30.00th=[14615], 40.00th=[15139], 50.00th=[15664], 60.00th=[16188], 00:34:32.668 | 70.00th=[16712], 80.00th=[17433], 90.00th=[18744], 95.00th=[20055], 00:34:32.668 | 99.00th=[22414], 99.50th=[22938], 99.90th=[23725], 99.95th=[23987], 00:34:32.668 | 99.99th=[24773] 00:34:32.668 bw ( KiB/s): min=92736, max=97280, per=90.05%, avg=96144.00, stdev=2272.00, samples=4 00:34:32.668 iops : min= 5796, max= 6080, avg=6009.00, stdev=142.00, samples=4 00:34:32.668 lat (usec) : 750=0.09%, 1000=5.05% 00:34:32.668 lat (msec) : 2=49.95%, 4=3.57%, 10=7.49%, 20=32.09%, 50=1.76% 00:34:32.668 cpu : usr=96.46%, sys=2.04%, ctx=167, majf=0, minf=8894 00:34:32.668 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:34:32.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:32.668 issued rwts: total=23099,12018,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.668 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:32.668 00:34:32.668 Run status group 0 (all jobs): 00:34:32.668 READ: bw=183MiB/s (192MB/s), 183MiB/s-183MiB/s (192MB/s-192MB/s), io=361MiB (378MB), run=1972-1972msec 00:34:32.668 WRITE: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=188MiB (197MB), run=1801-1801msec 00:34:32.925 ----------------------------------------------------- 00:34:32.925 Suppressions used: 00:34:32.925 count bytes template 00:34:32.925 1 63 /usr/src/fio/parse.c 00:34:32.925 453 43488 /usr/src/fio/iolog.c 00:34:32.925 1 8 libtcmalloc_minimal.so 00:34:32.925 ----------------------------------------------------- 00:34:32.925 00:34:32.925 15:39:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:33.183 15:39:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:34:33.183 15:39:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:34:33.183 15:39:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:34:33.183 15:39:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:34:33.183 15:39:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:34:33.183 15:39:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 
-- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:33.183 15:39:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:33.183 15:39:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:34:33.183 15:39:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:34:33.183 15:39:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5f:00.0 00:34:33.183 15:39:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5f:00.0 -i 192.168.100.8 00:34:36.463 Nvme0n1 00:34:36.463 15:39:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:35:44.301 15:40:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=571e583f-b4fa-4dae-8a2f-9fc238847618 00:35:44.301 15:40:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 571e583f-b4fa-4dae-8a2f-9fc238847618 00:35:44.301 15:40:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local lvs_uuid=571e583f-b4fa-4dae-8a2f-9fc238847618 00:35:44.301 15:40:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local lvs_info 00:35:44.301 15:40:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local fc 00:35:44.301 15:40:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local cs 00:35:44.301 15:40:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:44.301 15:40:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:35:44.301 { 00:35:44.301 "uuid": "571e583f-b4fa-4dae-8a2f-9fc238847618", 00:35:44.301 "name": "lvs_0", 00:35:44.301 "base_bdev": "Nvme0n1", 00:35:44.301 "total_data_clusters": 7451, 00:35:44.301 "free_clusters": 7451, 00:35:44.301 "block_size": 512, 00:35:44.301 "cluster_size": 1073741824 00:35:44.301 } 00:35:44.301 ]' 00:35:44.301 15:40:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="571e583f-b4fa-4dae-8a2f-9fc238847618") .free_clusters' 00:35:44.301 15:40:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # fc=7451 00:35:44.302 15:40:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="571e583f-b4fa-4dae-8a2f-9fc238847618") .cluster_size' 00:35:44.302 15:40:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # cs=1073741824 00:35:44.302 15:40:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1375 -- # free_mb=7629824 00:35:44.302 15:40:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1376 -- # echo 7629824 00:35:44.302 7629824 00:35:44.302 15:40:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 7629824 00:35:44.302 ebeba61e-3fcd-4930-bf89-97dd6e9a4924 00:35:44.302 15:40:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:35:44.302 15:40:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:35:44.302 15:40:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:35:44.302 15:40:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:35:44.302 15:40:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:35:44.302 15:40:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:35:44.302 15:40:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:44.302 15:40:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:35:44.302 15:40:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:35:44.302 15:40:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:35:44.302 15:40:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:35:44.302 15:40:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:44.302 15:40:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:35:44.302 15:40:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:35:44.302 15:40:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:44.302 15:40:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:44.302 15:40:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:44.302 15:40:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # break 00:35:44.302 15:40:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:35:44.302 15:40:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:35:44.302 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:35:44.302 fio-3.35 00:35:44.302 Starting 1 thread 00:35:44.302 00:35:44.302 test: (groupid=0, jobs=1): err= 0: pid=3264151: Wed Nov 6 15:40:06 2024 00:35:44.302 read: IOPS=716, BW=2867KiB/s (2936kB/s)(5752KiB/2006msec) 00:35:44.302 slat (nsec): min=1526, 
max=43877, avg=1856.45, stdev=1321.31 00:35:44.302 clat (usec): min=295, max=1823.9k, avg=89441.99, stdev=379708.38 00:35:44.302 lat (usec): min=296, max=1824.0k, avg=89443.85, stdev=379708.60 00:35:44.302 clat percentiles (msec): 00:35:44.302 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:35:44.302 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:35:44.302 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 7], 95.00th=[ 11], 00:35:44.302 | 99.00th=[ 1821], 99.50th=[ 1821], 99.90th=[ 1821], 99.95th=[ 1821], 00:35:44.302 | 99.99th=[ 1821] 00:35:44.302 bw ( KiB/s): min= 496, max=10496, per=100.00%, avg=5496.00, stdev=7071.07, samples=2 00:35:44.302 iops : min= 124, max= 2624, avg=1374.00, stdev=1767.77, samples=2 00:35:44.302 write: IOPS=751, BW=3005KiB/s (3077kB/s)(6028KiB/2006msec); 0 zone resets 00:35:44.302 slat (nsec): min=1585, max=9193, avg=2328.95, stdev=854.99 00:35:44.302 clat (usec): min=163, max=1824.4k, avg=80917.86, stdev=360654.87 00:35:44.302 lat (usec): min=166, max=1824.4k, avg=80920.19, stdev=360655.19 00:35:44.302 clat percentiles (msec): 00:35:44.302 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:35:44.302 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:35:44.302 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 7], 95.00th=[ 11], 00:35:44.302 | 99.00th=[ 1821], 99.50th=[ 1821], 99.90th=[ 1821], 99.95th=[ 1821], 00:35:44.302 | 99.99th=[ 1821] 00:35:44.302 bw ( KiB/s): min= 464, max=11088, per=100.00%, avg=5776.00, stdev=7512.30, samples=2 00:35:44.302 iops : min= 116, max= 2772, avg=1444.00, stdev=1878.08, samples=2 00:35:44.302 lat (usec) : 250=0.07%, 500=0.07%, 750=0.07%, 1000=0.24% 00:35:44.302 lat (msec) : 2=0.27%, 4=1.56%, 10=92.33%, 20=1.05%, 2000=4.35% 00:35:44.302 cpu : usr=99.45%, sys=0.15%, ctx=16, majf=0, minf=1748 00:35:44.302 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:35:44.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:44.302 issued rwts: total=1438,1507,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.302 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:44.302 00:35:44.302 Run status group 0 (all jobs): 00:35:44.302 READ: bw=2867KiB/s (2936kB/s), 2867KiB/s-2867KiB/s (2936kB/s-2936kB/s), io=5752KiB (5890kB), run=2006-2006msec 00:35:44.302 WRITE: bw=3005KiB/s (3077kB/s), 3005KiB/s-3005KiB/s (3077kB/s-3077kB/s), io=6028KiB (6173kB), run=2006-2006msec 00:35:44.302 ----------------------------------------------------- 00:35:44.302 Suppressions used: 00:35:44.302 count bytes template 00:35:44.302 1 64 /usr/src/fio/parse.c 00:35:44.302 1 8 libtcmalloc_minimal.so 00:35:44.302 ----------------------------------------------------- 00:35:44.302 00:35:44.302 15:40:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:44.302 15:40:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:35:44.302 15:40:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=268f5bbb-2f24-4fe0-b517-3c79564c5ca4 00:35:44.302 15:40:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 268f5bbb-2f24-4fe0-b517-3c79564c5ca4 00:35:44.302 15:40:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 
-- # local lvs_uuid=268f5bbb-2f24-4fe0-b517-3c79564c5ca4 00:35:44.302 15:40:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local lvs_info 00:35:44.302 15:40:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local fc 00:35:44.302 15:40:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local cs 00:35:44.302 15:40:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:44.302 15:40:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:35:44.302 { 00:35:44.302 "uuid": "571e583f-b4fa-4dae-8a2f-9fc238847618", 00:35:44.302 "name": "lvs_0", 00:35:44.302 "base_bdev": "Nvme0n1", 00:35:44.302 "total_data_clusters": 7451, 00:35:44.302 "free_clusters": 0, 00:35:44.302 "block_size": 512, 00:35:44.302 "cluster_size": 1073741824 00:35:44.302 }, 00:35:44.302 { 00:35:44.302 "uuid": "268f5bbb-2f24-4fe0-b517-3c79564c5ca4", 00:35:44.302 "name": "lvs_n_0", 00:35:44.302 "base_bdev": "ebeba61e-3fcd-4930-bf89-97dd6e9a4924", 00:35:44.302 "total_data_clusters": 1905593, 00:35:44.302 "free_clusters": 1905593, 00:35:44.302 "block_size": 512, 00:35:44.302 "cluster_size": 4194304 00:35:44.302 } 00:35:44.302 ]' 00:35:44.302 15:40:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="268f5bbb-2f24-4fe0-b517-3c79564c5ca4") .free_clusters' 00:35:44.302 15:40:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # fc=1905593 00:35:44.302 15:40:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="268f5bbb-2f24-4fe0-b517-3c79564c5ca4") .cluster_size' 00:35:44.302 15:40:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # cs=4194304 00:35:44.302 15:40:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1375 -- # free_mb=7622372 00:35:44.302 15:40:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1376 -- # echo 7622372 00:35:44.302 7622372 00:35:44.303 15:40:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 7622372 00:36:16.380 49950cb7-528b-43c8-ba96-6572c29770a3 00:36:16.380 15:40:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:36:16.380 15:40:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:36:16.380 15:40:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:36:16.380 15:40:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:36:16.380 15:40:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 
traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:36:16.380 15:40:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:16.380 15:40:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:16.380 15:40:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:16.380 15:40:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:36:16.380 15:40:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:36:16.380 15:40:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:16.380 15:40:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:16.380 15:40:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:36:16.380 15:40:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:36:16.380 15:40:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:16.380 15:40:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:16.380 15:40:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:16.380 15:40:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # break 00:36:16.380 15:40:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:36:16.380 15:40:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:36:16.380 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:36:16.380 fio-3.35 00:36:16.380 Starting 1 thread 00:36:17.757 00:36:17.757 test: (groupid=0, jobs=1): err= 0: pid=3269097: Wed Nov 6 15:40:44 2024 00:36:17.757 read: IOPS=8666, BW=33.9MiB/s (35.5MB/s)(67.9MiB/2006msec) 00:36:17.757 slat (nsec): min=1541, max=45775, avg=1762.39, stdev=489.25 00:36:17.757 clat (usec): min=4818, max=11855, avg=7191.70, stdev=203.36 00:36:17.757 lat (usec): min=4822, max=11857, avg=7193.46, stdev=203.31 00:36:17.757 clat percentiles (usec): 00:36:17.757 | 1.00th=[ 7046], 5.00th=[ 7111], 10.00th=[ 7111], 20.00th=[ 7177], 00:36:17.757 | 30.00th=[ 7177], 40.00th=[ 7177], 50.00th=[ 7177], 60.00th=[ 7177], 00:36:17.757 | 70.00th=[ 7242], 80.00th=[ 7242], 90.00th=[ 7242], 95.00th=[ 7242], 00:36:17.757 | 99.00th=[ 7373], 99.50th=[ 7504], 99.90th=[10028], 99.95th=[11731], 00:36:17.757 | 99.99th=[11863] 00:36:17.757 bw ( KiB/s): min=31496, max=35896, per=99.92%, avg=34636.00, stdev=2099.26, samples=4 00:36:17.757 iops : min= 7874, max= 8974, avg=8659.00, stdev=524.81, samples=4 00:36:17.757 write: IOPS=8658, BW=33.8MiB/s (35.5MB/s)(67.8MiB/2006msec); 0 zone resets 00:36:17.757 slat (nsec): min=1583, max=14432, avg=2192.11, stdev=407.84 00:36:17.757 clat (usec): min=4832, max=11840, avg=7223.32, stdev=193.23 00:36:17.757 lat (usec): 
min=4840, max=11842, avg=7225.51, stdev=193.20 00:36:17.757 clat percentiles (usec): 00:36:17.757 | 1.00th=[ 7111], 5.00th=[ 7111], 10.00th=[ 7177], 20.00th=[ 7177], 00:36:17.757 | 30.00th=[ 7177], 40.00th=[ 7177], 50.00th=[ 7242], 60.00th=[ 7242], 00:36:17.757 | 70.00th=[ 7242], 80.00th=[ 7242], 90.00th=[ 7308], 95.00th=[ 7308], 00:36:17.757 | 99.00th=[ 7373], 99.50th=[ 7439], 99.90th=[10028], 99.95th=[11731], 00:36:17.757 | 99.99th=[11863] 00:36:17.757 bw ( KiB/s): min=32256, max=35536, per=99.95%, avg=34616.00, stdev=1577.02, samples=4 00:36:17.757 iops : min= 8064, max= 8884, avg=8654.00, stdev=394.26, samples=4 00:36:17.757 lat (msec) : 10=99.88%, 20=0.12% 00:36:17.757 cpu : usr=99.35%, sys=0.25%, ctx=15, majf=0, minf=1789 00:36:17.757 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:36:17.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:17.757 issued rwts: total=17384,17369,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.757 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:17.757 00:36:17.757 Run status group 0 (all jobs): 00:36:17.757 READ: bw=33.9MiB/s (35.5MB/s), 33.9MiB/s-33.9MiB/s (35.5MB/s-35.5MB/s), io=67.9MiB (71.2MB), run=2006-2006msec 00:36:17.757 WRITE: bw=33.8MiB/s (35.5MB/s), 33.8MiB/s-33.8MiB/s (35.5MB/s-35.5MB/s), io=67.8MiB (71.1MB), run=2006-2006msec 00:36:17.757 ----------------------------------------------------- 00:36:17.757 Suppressions used: 00:36:17.757 count bytes template 00:36:17.757 1 64 /usr/src/fio/parse.c 00:36:17.757 1 8 libtcmalloc_minimal.so 00:36:17.757 ----------------------------------------------------- 00:36:17.757 00:36:17.757 15:40:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:36:18.015 15:40:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:36:18.015 15:40:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:37:54.465 15:42:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:37:54.465 15:42:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:38:50.679 15:43:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:38:50.679 15:43:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:39:00.647 15:43:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:39:00.647 15:43:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:39:00.647 15:43:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:39:00.647 15:43:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:00.647 15:43:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:39:00.647 15:43:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:39:00.647 15:43:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:39:00.647 15:43:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:39:00.647 15:43:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:00.647 15:43:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:39:00.647 rmmod nvme_rdma 00:39:00.647 rmmod nvme_fabrics 00:39:00.647 15:43:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:00.647 15:43:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:39:00.647 15:43:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:39:00.647 15:43:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3254217 ']' 00:39:00.647 15:43:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3254217 00:39:00.647 15:43:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 3254217 ']' 00:39:00.647 15:43:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 3254217 00:39:00.647 15:43:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:39:00.647 15:43:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:00.647 15:43:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3254217 00:39:00.647 15:43:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:00.647 15:43:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:00.647 15:43:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3254217' 00:39:00.647 killing process with pid 3254217 00:39:00.647 15:43:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 3254217 00:39:00.647 15:43:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 3254217 00:39:01.215 15:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:01.215 15:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:39:01.215 00:39:01.215 real 4m44.227s 00:39:01.215 user 18m6.700s 00:39:01.215 sys 0m50.344s 00:39:01.215 15:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:01.215 15:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:01.215 ************************************ 00:39:01.215 END TEST nvmf_fio_host 00:39:01.215 ************************************ 00:39:01.215 15:43:28 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:39:01.215 15:43:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:39:01.215 15:43:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:01.215 15:43:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:01.215 ************************************ 00:39:01.215 START TEST nvmf_failover 00:39:01.215 ************************************ 00:39:01.215 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:39:01.215 * Looking 
for test storage... 00:39:01.474 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:01.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.474 --rc genhtml_branch_coverage=1 00:39:01.474 --rc genhtml_function_coverage=1 00:39:01.474 --rc genhtml_legend=1 00:39:01.474 --rc geninfo_all_blocks=1 00:39:01.474 --rc geninfo_unexecuted_blocks=1 00:39:01.474 00:39:01.474 ' 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:01.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.474 --rc genhtml_branch_coverage=1 00:39:01.474 --rc genhtml_function_coverage=1 00:39:01.474 --rc genhtml_legend=1 00:39:01.474 --rc geninfo_all_blocks=1 00:39:01.474 --rc geninfo_unexecuted_blocks=1 00:39:01.474 00:39:01.474 ' 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:01.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.474 --rc genhtml_branch_coverage=1 00:39:01.474 --rc genhtml_function_coverage=1 00:39:01.474 --rc genhtml_legend=1 00:39:01.474 --rc geninfo_all_blocks=1 00:39:01.474 --rc geninfo_unexecuted_blocks=1 00:39:01.474 00:39:01.474 ' 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:01.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.474 --rc genhtml_branch_coverage=1 00:39:01.474 --rc genhtml_function_coverage=1 00:39:01.474 --rc genhtml_legend=1 00:39:01.474 --rc geninfo_all_blocks=1 00:39:01.474 --rc geninfo_unexecuted_blocks=1 00:39:01.474 00:39:01.474 ' 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:01.474 15:43:28 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:01.474 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:01.475 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:39:01.475 15:43:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:08.044 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:08.044 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:39:08.044 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:08.044 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:08.044 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:08.044 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:08.044 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:39:08.045 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:39:08.045 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # 
[[ rdma == rdma ]] 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:39:08.045 Found net devices under 0000:18:00.0: mlx_0_0 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:39:08.045 Found net devices under 0000:18:00.1: mlx_0_1 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # rdma_device_init 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # 
modprobe rdma_cm 00:39:08.045 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@530 -- # allocate_nic_ips 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:39:08.306 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:39:08.306 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:39:08.306 altname enp24s0f0np0 00:39:08.306 altname ens785f0np0 00:39:08.306 inet 192.168.100.8/24 scope global mlx_0_0 00:39:08.306 
valid_lft forever preferred_lft forever 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:39:08.306 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:39:08.306 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:39:08.306 altname enp24s0f1np1 00:39:08.306 altname ens785f1np1 00:39:08.306 inet 192.168.100.9/24 scope global mlx_0_1 00:39:08.306 valid_lft forever preferred_lft forever 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:39:08.306 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:39:08.307 15:43:35 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:39:08.307 192.168.100.9' 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:39:08.307 192.168.100.9' 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # head -n 1 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:39:08.307 192.168.100.9' 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # tail -n +2 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # head -n 1 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3293344 00:39:08.307 
15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3293344 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3293344 ']' 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:08.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:08.307 15:43:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:08.566 [2024-11-06 15:43:35.980073] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:39:08.566 [2024-11-06 15:43:35.980188] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:08.566 [2024-11-06 15:43:36.130087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:08.825 [2024-11-06 15:43:36.234832] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:08.825 [2024-11-06 15:43:36.234881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:08.825 [2024-11-06 15:43:36.234893] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:08.825 [2024-11-06 15:43:36.234908] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:08.825 [2024-11-06 15:43:36.234917] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
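[editor's note] For readers following the trace: once nvmf_tgt (pid 3293344, core mask 0xE) is up and its RPC socket is listening, host/failover.sh configures the target through scripts/rpc.py. The condensed sequence below is a minimal manual sketch of that configuration, not part of the captured output; it assumes rpc.py refers to scripts/rpc.py in the SPDK tree, that the default RPC socket /var/tmp/spdk.sock is used, and it reuses the NQN, serial, address and ports that appear verbatim in the trace that follows.

  # create the RDMA transport the test uses (same buffer sizing as in the trace)
  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  # 64 MiB / 512 B-block malloc bdev serving as the namespace
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  # one subsystem, allow-any-host, with the namespace attached
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # three RDMA listeners on the first target IP; these are the paths the failover test will add and remove
  for port in 4420 4421 4422; do
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s "$port"
  done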
00:39:08.825 [2024-11-06 15:43:36.236961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:08.825 [2024-11-06 15:43:36.236977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:08.825 [2024-11-06 15:43:36.237008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:09.391 15:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:09.391 15:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:39:09.391 15:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:09.391 15:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:09.391 15:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:09.391 15:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:09.391 15:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:39:09.650 [2024-11-06 15:43:37.041282] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f4883fbd940) succeed. 00:39:09.650 [2024-11-06 15:43:37.050719] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f4883f79940) succeed. 00:39:09.908 15:43:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:39:09.908 Malloc0 00:39:10.166 15:43:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:10.166 15:43:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:10.425 15:43:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:39:10.683 [2024-11-06 15:43:38.147762] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:39:10.683 15:43:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:39:10.941 [2024-11-06 15:43:38.340261] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:39:10.941 15:43:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:39:10.941 [2024-11-06 15:43:38.557051] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:39:11.200 15:43:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3293691 00:39:11.200 15:43:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock 
-q 128 -o 4096 -w verify -t 15 -f 00:39:11.200 15:43:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:11.200 15:43:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3293691 /var/tmp/bdevperf.sock 00:39:11.200 15:43:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3293691 ']' 00:39:11.200 15:43:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:11.200 15:43:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:11.200 15:43:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:11.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:11.200 15:43:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:11.200 15:43:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:12.134 15:43:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:12.134 15:43:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:39:12.134 15:43:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:39:12.134 NVMe0n1 00:39:12.392 15:43:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:39:12.392 00:39:12.650 15:43:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3293895 00:39:12.651 15:43:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:12.651 15:43:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:39:13.583 15:43:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:39:13.841 15:43:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:39:17.125 15:43:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:39:17.125 00:39:17.125 15:43:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:39:17.125 15:43:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:39:20.409 15:43:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:39:20.409 [2024-11-06 15:43:47.932468] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:39:20.409 15:43:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:39:21.344 15:43:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:39:21.603 15:43:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3293895 00:39:28.282 { 00:39:28.283 "results": [ 00:39:28.283 { 00:39:28.283 "job": "NVMe0n1", 00:39:28.283 "core_mask": "0x1", 00:39:28.283 "workload": "verify", 00:39:28.283 "status": "finished", 00:39:28.283 "verify_range": { 00:39:28.283 "start": 0, 00:39:28.283 "length": 16384 00:39:28.283 }, 00:39:28.283 "queue_depth": 128, 00:39:28.283 "io_size": 4096, 00:39:28.283 "runtime": 15.006093, 00:39:28.283 "iops": 11891.636284008102, 00:39:28.283 "mibps": 46.45170423440665, 00:39:28.283 "io_failed": 4475, 00:39:28.283 "io_timeout": 0, 00:39:28.283 "avg_latency_us": 10477.903425237559, 00:39:28.283 "min_latency_us": 520.0139130434783, 00:39:28.283 "max_latency_us": 1050399.6104347827 00:39:28.283 } 00:39:28.283 ], 00:39:28.283 "core_count": 1 00:39:28.283 } 00:39:28.283 15:43:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3293691 00:39:28.283 15:43:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3293691 ']' 00:39:28.283 15:43:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3293691 00:39:28.283 15:43:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:39:28.283 15:43:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:28.283 15:43:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3293691 00:39:28.283 15:43:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:28.283 15:43:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:28.283 15:43:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3293691' 00:39:28.283 killing process with pid 3293691 00:39:28.283 15:43:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3293691 00:39:28.283 15:43:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3293691 00:39:28.857 15:43:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:39:28.858 [2024-11-06 15:43:38.673728] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
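[editor's note] The host side of the run above: bdevperf is started in wait mode (-z) on its own RPC socket, NVMe0 is attached with -x failover over ports 4420 and 4421, perform_tests kicks off 15 seconds of 4 KiB verify I/O at queue depth 128, and the script then removes and re-adds listeners (4420, then 4421, then 4422) so the initiator has to fail over between paths while I/O is in flight. The JSON summary above reports 11891.6 IOPS over the 15.006 s run with 4475 failed I/Os. The sketch below reconstructs that host-side sequence from the commands visible in the trace; paths are abbreviated relative to the SPDK tree and the bdevperf RPC socket is the /var/tmp/bdevperf.sock used by the test.

  # launch bdevperf in wait mode on its own RPC socket
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  # attach two paths to the same subsystem with explicit failover policy
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # start the timed I/O run, then yank the active listener so the initiator must fail over
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The repeated "ABORTED - SQ DELETION" completions dumped from try.txt below are what one would expect when a listener is removed mid-I/O: the queue pairs on the dropped path are torn down and their in-flight commands complete aborted before being retried on the surviving path.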
00:39:28.858 [2024-11-06 15:43:38.673843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293691 ] 00:39:28.858 [2024-11-06 15:43:38.806526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:28.858 [2024-11-06 15:43:38.916032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:28.858 Running I/O for 15 seconds... 00:39:28.858 15107.00 IOPS, 59.01 MiB/s [2024-11-06T14:43:56.493Z] 8256.00 IOPS, 32.25 MiB/s [2024-11-06T14:43:56.493Z] [2024-11-06 15:43:42.250352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.250428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.250463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.250480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.250497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.250512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.250527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.250542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.250558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.250576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.250591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.250608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.250623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.250639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.250655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.250672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.250687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:73 nsid:1 lba:1888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.250702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.250717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.250733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.250748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.250765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.250788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.250805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.250821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.250840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.250854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.250870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.250886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.250902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.250916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.250931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.250945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.250960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.250980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.250995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.251010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1968 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.251025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.251040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.251054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.251068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.251085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.251098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.251115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.251135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.251151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.251165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.251182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.251197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.251212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.251226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.251241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.251254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.251269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.251283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.858 [2024-11-06 15:43:42.251298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.251312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fd000 len:0x1000 key:0x183000 00:39:28.858 
[2024-11-06 15:43:42.251330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.251347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fb000 len:0x1000 key:0x183000 00:39:28.858 [2024-11-06 15:43:42.251362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.251376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f9000 len:0x1000 key:0x183000 00:39:28.858 [2024-11-06 15:43:42.251391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.251407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f7000 len:0x1000 key:0x183000 00:39:28.858 [2024-11-06 15:43:42.251421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.251435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f5000 len:0x1000 key:0x183000 00:39:28.858 [2024-11-06 15:43:42.251458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.251475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f3000 len:0x1000 key:0x183000 00:39:28.858 [2024-11-06 15:43:42.251491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.858 [2024-11-06 15:43:42.251506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f1000 len:0x1000 key:0x183000 00:39:28.858 [2024-11-06 15:43:42.251521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.251537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ef000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.251553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.251567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ed000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.251585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.251600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043eb000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.251616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.251631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e9000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.251646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.251661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e7000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.251676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.251691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e5000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.251706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.251721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e3000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.251736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.251751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e1000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.251766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.251780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043df000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.251795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.251809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043dd000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.251827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.251842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043db000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.251858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.251872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d9000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.251888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.251905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 
nsid:1 lba:1176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d7000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.251919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.251933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d5000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.251949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.251965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d3000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.251979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.251993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d1000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.252009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.252023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cf000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.252037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.252052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cd000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.252069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.252083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cb000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.252098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.252113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c9000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.252133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.252149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c7000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.252163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.252178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c5000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.252193] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.252207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c3000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.252226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.252241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c1000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.252258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.252273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bf000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.252288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.252303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bd000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.252321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.252335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bb000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.252350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.252364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b9000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.252380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.252394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b7000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.252409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.252424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b5000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.252439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.252455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b3000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.252469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 
[2024-11-06 15:43:42.252485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b1000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.252500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.252514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043af000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.252528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.252544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ad000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.252561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.252576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ab000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.252591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.859 [2024-11-06 15:43:42.252607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a9000 len:0x1000 key:0x183000 00:39:28.859 [2024-11-06 15:43:42.252623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.252637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a7000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.252652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.252667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a5000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.252682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.252696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a3000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.252712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.252727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a1000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.252742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.252756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1400 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x20000439f000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.252771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.252786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439d000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.252803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.252817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439b000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.252832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.252847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004399000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.252861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.252875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004397000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.252890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.252905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004395000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.252920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.252936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004393000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.252950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.252969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004391000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.252983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.252997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438f000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.253012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.253027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438d000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.253044] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.253058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438b000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.253073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.253088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004389000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.253102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.253116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004387000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.253136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.253152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004385000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.253166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.253180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004383000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.253195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.253210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004381000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.253225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.253239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437f000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.253256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.253271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437d000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.253288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.253302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437b000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.253320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.253336] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004379000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.253350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.253365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004377000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.253380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.253396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004375000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.253415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.253431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004373000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.253447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.253462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004371000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.253476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.253491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436f000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.253506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.253521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436d000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.253538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.253553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436b000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.253568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.253584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004369000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.253598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.253615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004367000 len:0x1000 
key:0x183000 00:39:28.860 [2024-11-06 15:43:42.253631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.253646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004365000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.253660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.253676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004363000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.253693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.860 [2024-11-06 15:43:42.253707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004361000 len:0x1000 key:0x183000 00:39:28.860 [2024-11-06 15:43:42.253722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:42.253737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435f000 len:0x1000 key:0x183000 00:39:28.861 [2024-11-06 15:43:42.253752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:42.253768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435d000 len:0x1000 key:0x183000 00:39:28.861 [2024-11-06 15:43:42.253784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:42.253800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435b000 len:0x1000 key:0x183000 00:39:28.861 [2024-11-06 15:43:42.253815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:42.253830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004359000 len:0x1000 key:0x183000 00:39:28.861 [2024-11-06 15:43:42.253844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:42.253859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004357000 len:0x1000 key:0x183000 00:39:28.861 [2024-11-06 15:43:42.253874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:42.253888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004355000 len:0x1000 key:0x183000 00:39:28.861 [2024-11-06 15:43:42.253903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:42.253919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004353000 len:0x1000 key:0x183000 00:39:28.861 [2024-11-06 15:43:42.253935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:42.253949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004351000 len:0x1000 key:0x183000 00:39:28.861 [2024-11-06 15:43:42.253963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:42.253979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434f000 len:0x1000 key:0x183000 00:39:28.861 [2024-11-06 15:43:42.253995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:42.254009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434d000 len:0x1000 key:0x183000 00:39:28.861 [2024-11-06 15:43:42.254028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:42.254043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434b000 len:0x1000 key:0x183000 00:39:28.861 [2024-11-06 15:43:42.254061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:42.254076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004349000 len:0x1000 key:0x183000 00:39:28.861 [2024-11-06 15:43:42.254091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:42.254105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004347000 len:0x1000 key:0x183000 00:39:28.861 [2024-11-06 15:43:42.254120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:42.254143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004345000 len:0x1000 key:0x183000 00:39:28.861 [2024-11-06 15:43:42.254158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:42.254172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004343000 len:0x1000 key:0x183000 00:39:28.861 [2024-11-06 15:43:42.254187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:42.254201] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004341000 len:0x1000 key:0x183000 00:39:28.861 [2024-11-06 15:43:42.254216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:42.254231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433f000 len:0x1000 key:0x183000 00:39:28.861 [2024-11-06 15:43:42.254248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:42.254263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433d000 len:0x1000 key:0x183000 00:39:28.861 [2024-11-06 15:43:42.254280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:42.254295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433b000 len:0x1000 key:0x183000 00:39:28.861 [2024-11-06 15:43:42.254311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:42.254325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004339000 len:0x1000 key:0x183000 00:39:28.861 [2024-11-06 15:43:42.254340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:42.256418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:28.861 [2024-11-06 15:43:42.256447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:28.861 [2024-11-06 15:43:42.256461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1816 len:8 PRP1 0x0 PRP2 0x0 00:39:28.861 [2024-11-06 15:43:42.256479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:42.256696] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:39:28.861 [2024-11-06 15:43:42.256717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:39:28.861 [2024-11-06 15:43:42.259872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:39:28.861 [2024-11-06 15:43:42.288258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:39:28.861 [2024-11-06 15:43:42.332169] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
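(Editorial note, not part of the captured console output.) The entries above are the expected signature of a hard RDMA failover in this test: queued I/O is completed manually with "ABORTED - SQ DELETION (00/08)", bdev_nvme logs "Start failover from 192.168.100.8:4420 to 192.168.100.8:4421", the controller is marked failed and disconnected, a CQ transport error -6 (No such device or address) is reported on qpair 0 during the reconnect, and the reset finally completes with "Resetting controller successful." The per-second lines that follow (e.g. 9607.67 IOPS, 37.53 MiB/s) are consistent with the len:8 (4 KiB) I/O size used throughout. Below is a minimal, hypothetical triage sketch for logs of this shape; it is not part of the SPDK test suite, the file name and regexes are assumptions based only on the messages shown here, and it only reads a saved console log.

#!/usr/bin/env python3
# Hypothetical helper (assumption: plain-text log, one entry per line) that
# extracts failover/reset events, counts aborted completions, and
# sanity-checks the reported MiB/s against a 4 KiB block size (len:8 * 512 B).
import re
import sys

ABORT_RE = re.compile(r"ABORTED - SQ DELETION \(00/08\)")
FAILOVER_RE = re.compile(r"Start failover from (\S+) to (\S+)")
RESET_OK_RE = re.compile(r"Resetting controller successful")
IOPS_RE = re.compile(r"([0-9]+\.[0-9]+) IOPS, ([0-9]+\.[0-9]+) MiB/s")

def summarize(path):
    aborted = 0
    for line in open(path, errors="replace"):
        if ABORT_RE.search(line):
            aborted += 1
        m = FAILOVER_RE.search(line)
        if m:
            print(f"failover: {m.group(1)} -> {m.group(2)}")
        if RESET_OK_RE.search(line):
            print("controller reset completed")
        for iops, mibs in IOPS_RE.findall(line):
            # 4096 bytes per I/O => expected MiB/s = IOPS * 4096 / 2^20
            expect = float(iops) * 4096 / (1024 * 1024)
            print(f"{iops} IOPS -> expected {expect:.2f} MiB/s, reported {mibs} MiB/s")
    print(f"aborted completions seen: {aborted}")

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "nvmf-phy-autotest.log")

Run offline against a saved console log (for example: python3 triage.py build.log); it does not drive or modify the test run itself.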
00:39:28.861 9607.67 IOPS, 37.53 MiB/s [2024-11-06T14:43:56.496Z] 10970.00 IOPS, 42.85 MiB/s [2024-11-06T14:43:56.496Z] 10464.00 IOPS, 40.88 MiB/s [2024-11-06T14:43:56.496Z] [2024-11-06 15:43:45.735140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.861 [2024-11-06 15:43:45.735208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:45.735246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.861 [2024-11-06 15:43:45.735260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:45.735278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.861 [2024-11-06 15:43:45.735291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:45.735309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.861 [2024-11-06 15:43:45.735322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:45.735341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:37560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004345000 len:0x1000 key:0x181b00 00:39:28.861 [2024-11-06 15:43:45.735355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:45.735373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004343000 len:0x1000 key:0x181b00 00:39:28.861 [2024-11-06 15:43:45.735388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:45.735405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:37576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434b000 len:0x1000 key:0x181b00 00:39:28.861 [2024-11-06 15:43:45.735418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:45.735436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:37584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434d000 len:0x1000 key:0x181b00 00:39:28.861 [2024-11-06 15:43:45.735451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:45.735469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:37592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434f000 len:0x1000 key:0x181b00 00:39:28.861 [2024-11-06 15:43:45.735482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:45.735509] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004351000 len:0x1000 key:0x181b00 00:39:28.861 [2024-11-06 15:43:45.735522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:45.735539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:37608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004353000 len:0x1000 key:0x181b00 00:39:28.861 [2024-11-06 15:43:45.735554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:45.735572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004355000 len:0x1000 key:0x181b00 00:39:28.861 [2024-11-06 15:43:45.735585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.861 [2024-11-06 15:43:45.735604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.862 [2024-11-06 15:43:45.735618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.735636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.862 [2024-11-06 15:43:45.735650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.735668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.862 [2024-11-06 15:43:45.735681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.735698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:38096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.862 [2024-11-06 15:43:45.735711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.735728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.862 [2024-11-06 15:43:45.735740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.735760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:38112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.862 [2024-11-06 15:43:45.735772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.735789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.862 [2024-11-06 15:43:45.735801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.735818] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.862 [2024-11-06 15:43:45.735830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.735847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004365000 len:0x1000 key:0x181b00 00:39:28.862 [2024-11-06 15:43:45.735859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.735877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004363000 len:0x1000 key:0x181b00 00:39:28.862 [2024-11-06 15:43:45.735891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.735909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bb000 len:0x1000 key:0x181b00 00:39:28.862 [2024-11-06 15:43:45.735928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.735946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:37648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bd000 len:0x1000 key:0x181b00 00:39:28.862 [2024-11-06 15:43:45.735958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.735976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:37656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bf000 len:0x1000 key:0x181b00 00:39:28.862 [2024-11-06 15:43:45.735989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.736008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:37664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c1000 len:0x1000 key:0x181b00 00:39:28.862 [2024-11-06 15:43:45.736020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.736038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:37672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c3000 len:0x1000 key:0x181b00 00:39:28.862 [2024-11-06 15:43:45.736050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.736067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:37680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c5000 len:0x1000 key:0x181b00 00:39:28.862 [2024-11-06 15:43:45.736079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.736097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.862 [2024-11-06 
15:43:45.736109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.736138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.862 [2024-11-06 15:43:45.736151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.736169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.862 [2024-11-06 15:43:45.736181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.736199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:38160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.862 [2024-11-06 15:43:45.736211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.736228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.862 [2024-11-06 15:43:45.736241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.736262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:38176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.862 [2024-11-06 15:43:45.736277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.736294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:38184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.862 [2024-11-06 15:43:45.736307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.736323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:38192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.862 [2024-11-06 15:43:45.736336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.736352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.862 [2024-11-06 15:43:45.736365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.736382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:38208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.862 [2024-11-06 15:43:45.736394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.736410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.862 [2024-11-06 15:43:45.736423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.736439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.862 [2024-11-06 15:43:45.736451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.736468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:38232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.862 [2024-11-06 15:43:45.736480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.736500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:38240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.862 [2024-11-06 15:43:45.736512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.736529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.862 [2024-11-06 15:43:45.736541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.736558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.862 [2024-11-06 15:43:45.736571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.862 [2024-11-06 15:43:45.736587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.862 [2024-11-06 15:43:45.736600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.736619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:38272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.863 [2024-11-06 15:43:45.736631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.736649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:37688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004389000 len:0x1000 key:0x181b00 00:39:28.863 [2024-11-06 15:43:45.736662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.736679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:37696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004387000 len:0x1000 key:0x181b00 00:39:28.863 [2024-11-06 15:43:45.736691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.736708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:37704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004385000 len:0x1000 key:0x181b00 00:39:28.863 
[2024-11-06 15:43:45.736721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.736740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:37712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004383000 len:0x1000 key:0x181b00 00:39:28.863 [2024-11-06 15:43:45.736752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.736769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:37720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004381000 len:0x1000 key:0x181b00 00:39:28.863 [2024-11-06 15:43:45.736782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.736798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:37728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437f000 len:0x1000 key:0x181b00 00:39:28.863 [2024-11-06 15:43:45.736810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.736827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:37736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437d000 len:0x1000 key:0x181b00 00:39:28.863 [2024-11-06 15:43:45.736840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.736857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437b000 len:0x1000 key:0x181b00 00:39:28.863 [2024-11-06 15:43:45.736870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.736886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:38280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.863 [2024-11-06 15:43:45.736899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.736916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:38288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.863 [2024-11-06 15:43:45.736929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.736945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.863 [2024-11-06 15:43:45.736958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.736978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:38304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.863 [2024-11-06 15:43:45.736991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.737010] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.863 [2024-11-06 15:43:45.737022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.737039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:38320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.863 [2024-11-06 15:43:45.737051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.737067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.863 [2024-11-06 15:43:45.737080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.737097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.863 [2024-11-06 15:43:45.737109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.737131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:37752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004339000 len:0x1000 key:0x181b00 00:39:28.863 [2024-11-06 15:43:45.737145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.737163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:37760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004337000 len:0x1000 key:0x181b00 00:39:28.863 [2024-11-06 15:43:45.737176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.737192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:37768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004367000 len:0x1000 key:0x181b00 00:39:28.863 [2024-11-06 15:43:45.737205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.737225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:37776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004369000 len:0x1000 key:0x181b00 00:39:28.863 [2024-11-06 15:43:45.737239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.737255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436b000 len:0x1000 key:0x181b00 00:39:28.863 [2024-11-06 15:43:45.737270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.737286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:37792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436d000 len:0x1000 key:0x181b00 00:39:28.863 [2024-11-06 15:43:45.737299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.737316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:37800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436f000 len:0x1000 key:0x181b00 00:39:28.863 [2024-11-06 15:43:45.737329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.737347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004371000 len:0x1000 key:0x181b00 00:39:28.863 [2024-11-06 15:43:45.737359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.737379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:38344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.863 [2024-11-06 15:43:45.737392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.737409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.863 [2024-11-06 15:43:45.737421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.737437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:38360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.863 [2024-11-06 15:43:45.737451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.737470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:38368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.863 [2024-11-06 15:43:45.737482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.737498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:38376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.863 [2024-11-06 15:43:45.737511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.737527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:38384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.863 [2024-11-06 15:43:45.737540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.737556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.863 [2024-11-06 15:43:45.737569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.737585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:38400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.863 [2024-11-06 15:43:45.737598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.737614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.863 [2024-11-06 15:43:45.737627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.737643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:38416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.863 [2024-11-06 15:43:45.737655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.737671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.863 [2024-11-06 15:43:45.737684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.737704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:38432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.863 [2024-11-06 15:43:45.737717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.863 [2024-11-06 15:43:45.737735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:38440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.864 [2024-11-06 15:43:45.737750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.737766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:38448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.864 [2024-11-06 15:43:45.737778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.737796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:37816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d5000 len:0x1000 key:0x181b00 00:39:28.864 [2024-11-06 15:43:45.737809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.737826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:37824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d3000 len:0x1000 key:0x181b00 00:39:28.864 [2024-11-06 15:43:45.737838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.737856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d1000 len:0x1000 key:0x181b00 00:39:28.864 [2024-11-06 15:43:45.737874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.737892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:37840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cf000 len:0x1000 key:0x181b00 00:39:28.864 [2024-11-06 15:43:45.737905] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.737922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cd000 len:0x1000 key:0x181b00 00:39:28.864 [2024-11-06 15:43:45.737935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.737954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:37856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cb000 len:0x1000 key:0x181b00 00:39:28.864 [2024-11-06 15:43:45.737967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.737985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:37864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004373000 len:0x1000 key:0x181b00 00:39:28.864 [2024-11-06 15:43:45.737997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004375000 len:0x1000 key:0x181b00 00:39:28.864 [2024-11-06 15:43:45.738026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.864 [2024-11-06 15:43:45.738056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.864 [2024-11-06 15:43:45.738084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:37880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c9000 len:0x1000 key:0x181b00 00:39:28.864 [2024-11-06 15:43:45.738118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:37888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c7000 len:0x1000 key:0x181b00 00:39:28.864 [2024-11-06 15:43:45.738155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:37896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a5000 len:0x1000 key:0x181b00 00:39:28.864 [2024-11-06 15:43:45.738184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a3000 len:0x1000 key:0x181b00 00:39:28.864 [2024-11-06 15:43:45.738217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:37912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a1000 len:0x1000 key:0x181b00 00:39:28.864 [2024-11-06 15:43:45.738247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439f000 len:0x1000 key:0x181b00 00:39:28.864 [2024-11-06 15:43:45.738277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:37928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439d000 len:0x1000 key:0x181b00 00:39:28.864 [2024-11-06 15:43:45.738307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:37936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439b000 len:0x1000 key:0x181b00 00:39:28.864 [2024-11-06 15:43:45.738338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004349000 len:0x1000 key:0x181b00 00:39:28.864 [2024-11-06 15:43:45.738367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:37952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004347000 len:0x1000 key:0x181b00 00:39:28.864 [2024-11-06 15:43:45.738398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:37960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004397000 len:0x1000 key:0x181b00 00:39:28.864 [2024-11-06 15:43:45.738428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004399000 len:0x1000 key:0x181b00 00:39:28.864 [2024-11-06 15:43:45.738460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004341000 
len:0x1000 key:0x181b00 00:39:28.864 [2024-11-06 15:43:45.738493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:37984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433f000 len:0x1000 key:0x181b00 00:39:28.864 [2024-11-06 15:43:45.738522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433d000 len:0x1000 key:0x181b00 00:39:28.864 [2024-11-06 15:43:45.738555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433b000 len:0x1000 key:0x181b00 00:39:28.864 [2024-11-06 15:43:45.738585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:38472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.864 [2024-11-06 15:43:45.738615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.864 [2024-11-06 15:43:45.738644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:38488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.864 [2024-11-06 15:43:45.738673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.864 [2024-11-06 15:43:45.738706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.864 [2024-11-06 15:43:45.738731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.864 [2024-11-06 15:43:45.738758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:49 nsid:1 lba:38520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.864 [2024-11-06 15:43:45.738784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.864 [2024-11-06 15:43:45.738810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.864 [2024-11-06 15:43:45.738825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:38536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.864 [2024-11-06 15:43:45.738838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:45.738852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:38544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.865 [2024-11-06 15:43:45.738865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:45.738879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:38552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.865 [2024-11-06 15:43:45.738891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:45.738905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.865 [2024-11-06 15:43:45.738917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:45.738931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:38568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.865 [2024-11-06 15:43:45.738943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:45.738957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.865 [2024-11-06 15:43:45.738968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:45.738982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b5000 len:0x1000 key:0x181b00 00:39:28.865 [2024-11-06 15:43:45.738995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:45.739009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b3000 len:0x1000 key:0x181b00 00:39:28.865 [2024-11-06 15:43:45.739021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:45.739036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:43 nsid:1 lba:38024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b1000 len:0x1000 key:0x181b00 00:39:28.865 [2024-11-06 15:43:45.739048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:45.741040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:28.865 [2024-11-06 15:43:45.741065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:28.865 [2024-11-06 15:43:45.741079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38032 len:8 PRP1 0x0 PRP2 0x0 00:39:28.865 [2024-11-06 15:43:45.741093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:45.741302] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:39:28.865 [2024-11-06 15:43:45.741320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:39:28.865 [2024-11-06 15:43:45.744461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:39:28.865 [2024-11-06 15:43:45.773088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] CQ transport error -6 (No such device or address) on qpair id 0 00:39:28.865 [2024-11-06 15:43:45.813483] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:39:28.865 9593.67 IOPS, 37.48 MiB/s [2024-11-06T14:43:56.500Z] 10398.86 IOPS, 40.62 MiB/s [2024-11-06T14:43:56.500Z] 11001.88 IOPS, 42.98 MiB/s [2024-11-06T14:43:56.500Z] 11429.22 IOPS, 44.65 MiB/s [2024-11-06T14:43:56.500Z] [2024-11-06 15:43:50.152459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:28.865 [2024-11-06 15:43:50.152526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:50.152546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:28.865 [2024-11-06 15:43:50.152559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:50.152572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:28.865 [2024-11-06 15:43:50.152585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:50.152598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:39:28.865 [2024-11-06 15:43:50.152611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:50.154394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] CQ transport error -6 (No such device or address) on qpair id 0 00:39:28.865 [2024-11-06 15:43:50.154420] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:39:28.865 [2024-11-06 15:43:50.154438] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:39:28.865 [2024-11-06 15:43:50.154453] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:39:28.865 [2024-11-06 15:43:50.154482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:59624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cd000 len:0x1000 key:0x183000 00:39:28.865 [2024-11-06 15:43:50.154497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:50.154575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.865 [2024-11-06 15:43:50.154591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:50.154632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.865 [2024-11-06 15:43:50.154647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:50.154685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.865 [2024-11-06 15:43:50.154700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:50.154739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.865 [2024-11-06 15:43:50.154753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:50.154793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.865 [2024-11-06 15:43:50.154807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:50.154852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.865 [2024-11-06 15:43:50.154867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:50.154907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.865 [2024-11-06 15:43:50.154921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:50.154960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.865 [2024-11-06 15:43:50.154975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:50.155013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bd000 len:0x1000 key:0x183000 00:39:28.865 [2024-11-06 15:43:50.155029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:50.155069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bb000 len:0x1000 key:0x183000 00:39:28.865 [2024-11-06 15:43:50.155085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:50.155132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004363000 len:0x1000 key:0x183000 00:39:28.865 [2024-11-06 15:43:50.155149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:50.155190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004365000 len:0x1000 key:0x183000 00:39:28.865 [2024-11-06 15:43:50.155205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:50.155246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004355000 len:0x1000 key:0x183000 00:39:28.865 [2024-11-06 15:43:50.155261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:50.155301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004353000 len:0x1000 key:0x183000 00:39:28.865 [2024-11-06 15:43:50.155317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:50.155356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004351000 len:0x1000 key:0x183000 00:39:28.865 [2024-11-06 15:43:50.155371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:50.155412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:59688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434f000 len:0x1000 key:0x183000 00:39:28.865 [2024-11-06 15:43:50.155427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:50.155467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.865 [2024-11-06 15:43:50.155483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.865 [2024-11-06 15:43:50.155524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:81 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.865 [2024-11-06 15:43:50.155540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.155577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.155591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.155630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.155644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.155681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.155695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.155734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.155748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.155785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.155799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.155837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.155853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.155890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.155904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.155943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.155958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.155997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.156012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.156049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.156063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.156101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.156115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.156159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.156176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.156217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.156232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.156271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.156285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.156325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004383000 len:0x1000 key:0x183000 00:39:28.866 [2024-11-06 15:43:50.156341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.156381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:59704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004385000 len:0x1000 key:0x183000 00:39:28.866 [2024-11-06 15:43:50.156395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.156436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004387000 len:0x1000 key:0x183000 00:39:28.866 [2024-11-06 15:43:50.156451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.156491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004389000 len:0x1000 key:0x183000 00:39:28.866 [2024-11-06 15:43:50.156505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.156545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:59728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c5000 len:0x1000 key:0x183000 00:39:28.866 [2024-11-06 15:43:50.156560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.156600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c3000 len:0x1000 key:0x183000 00:39:28.866 [2024-11-06 15:43:50.156615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.156655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c1000 len:0x1000 key:0x183000 00:39:28.866 [2024-11-06 15:43:50.156670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.156709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:59752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bf000 len:0x1000 key:0x183000 00:39:28.866 [2024-11-06 15:43:50.156724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.156763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.156778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.156816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.156832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.156870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.156885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.156922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.156936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.156974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.156989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.157026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.157039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.157079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.157092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 
[2024-11-06 15:43:50.157137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.157152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.157191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.157206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.157244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.157258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.157294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.157309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.157345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.157359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.157397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.157411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.157448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.157462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.157511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.866 [2024-11-06 15:43:50.157524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.866 [2024-11-06 15:43:50.157560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.867 [2024-11-06 15:43:50.157573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.157610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.867 [2024-11-06 15:43:50.157623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.157659] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.867 [2024-11-06 15:43:50.157672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.157708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.867 [2024-11-06 15:43:50.157722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.157757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.867 [2024-11-06 15:43:50.157771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.157807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.867 [2024-11-06 15:43:50.157821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.157856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.867 [2024-11-06 15:43:50.157870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.157905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.867 [2024-11-06 15:43:50.157919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.157955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.867 [2024-11-06 15:43:50.157969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.158006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:59760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004357000 len:0x1000 key:0x183000 00:39:28.867 [2024-11-06 15:43:50.158020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.158058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004359000 len:0x1000 key:0x183000 00:39:28.867 [2024-11-06 15:43:50.158072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.158111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435b000 len:0x1000 key:0x183000 00:39:28.867 [2024-11-06 15:43:50.158133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 
[2024-11-06 15:43:50.158190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:59784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435d000 len:0x1000 key:0x183000 00:39:28.867 [2024-11-06 15:43:50.158205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.158246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:59792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435f000 len:0x1000 key:0x183000 00:39:28.867 [2024-11-06 15:43:50.158261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.158300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004361000 len:0x1000 key:0x183000 00:39:28.867 [2024-11-06 15:43:50.158314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.158353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a9000 len:0x1000 key:0x183000 00:39:28.867 [2024-11-06 15:43:50.158368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.158407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:59816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a7000 len:0x1000 key:0x183000 00:39:28.867 [2024-11-06 15:43:50.158422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.158461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.867 [2024-11-06 15:43:50.158476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.158514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.867 [2024-11-06 15:43:50.158528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.158566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.867 [2024-11-06 15:43:50.158581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.158619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.867 [2024-11-06 15:43:50.158633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.158670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.867 [2024-11-06 15:43:50.158685] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.158721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.867 [2024-11-06 15:43:50.158736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.158772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.867 [2024-11-06 15:43:50.158790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.158826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.867 [2024-11-06 15:43:50.158840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.158878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004395000 len:0x1000 key:0x183000 00:39:28.867 [2024-11-06 15:43:50.158892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.158930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:59832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004393000 len:0x1000 key:0x183000 00:39:28.867 [2024-11-06 15:43:50.158945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.158983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:59840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004391000 len:0x1000 key:0x183000 00:39:28.867 [2024-11-06 15:43:50.158997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.159037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:59848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438f000 len:0x1000 key:0x183000 00:39:28.867 [2024-11-06 15:43:50.159051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.159091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:59856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438d000 len:0x1000 key:0x183000 00:39:28.867 [2024-11-06 15:43:50.159105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.159158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438b000 len:0x1000 key:0x183000 00:39:28.867 [2024-11-06 15:43:50.159174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.159216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:50 nsid:1 lba:59872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004379000 len:0x1000 key:0x183000 00:39:28.867 [2024-11-06 15:43:50.159230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.867 [2024-11-06 15:43:50.159270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004377000 len:0x1000 key:0x183000 00:39:28.868 [2024-11-06 15:43:50.159285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.159324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004341000 len:0x1000 key:0x183000 00:39:28.868 [2024-11-06 15:43:50.159338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.159377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433f000 len:0x1000 key:0x183000 00:39:28.868 [2024-11-06 15:43:50.159391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.159433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:59904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433d000 len:0x1000 key:0x183000 00:39:28.868 [2024-11-06 15:43:50.159448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.159488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:59912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433b000 len:0x1000 key:0x183000 00:39:28.868 [2024-11-06 15:43:50.159503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.159544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b5000 len:0x1000 key:0x183000 00:39:28.868 [2024-11-06 15:43:50.159559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.159598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:59928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b3000 len:0x1000 key:0x183000 00:39:28.868 [2024-11-06 15:43:50.159613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.159653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:59936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b1000 len:0x1000 key:0x183000 00:39:28.868 [2024-11-06 15:43:50.159668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.159708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043af000 len:0x1000 key:0x183000 00:39:28.868 [2024-11-06 
15:43:50.159722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.159762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.868 [2024-11-06 15:43:50.159777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.159815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.868 [2024-11-06 15:43:50.159829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.159868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.868 [2024-11-06 15:43:50.159882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.159921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.868 [2024-11-06 15:43:50.159934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.159972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.868 [2024-11-06 15:43:50.159986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.160022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.868 [2024-11-06 15:43:50.160036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.160072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.868 [2024-11-06 15:43:50.160088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.160132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.868 [2024-11-06 15:43:50.160147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.160185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.868 [2024-11-06 15:43:50.160201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.160238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.868 [2024-11-06 15:43:50.160253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.160290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.868 [2024-11-06 15:43:50.160304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.160341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.868 [2024-11-06 15:43:50.160356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.160394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.868 [2024-11-06 15:43:50.160408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.160446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.868 [2024-11-06 15:43:50.160462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.160499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.868 [2024-11-06 15:43:50.160513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.160552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.868 [2024-11-06 15:43:50.160567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.160607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a1000 len:0x1000 key:0x183000 00:39:28.868 [2024-11-06 15:43:50.160622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.160662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:59960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439f000 len:0x1000 key:0x183000 00:39:28.868 [2024-11-06 15:43:50.160677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.160717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439d000 len:0x1000 key:0x183000 00:39:28.868 [2024-11-06 15:43:50.160734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.160774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439b000 len:0x1000 
key:0x183000 00:39:28.868 [2024-11-06 15:43:50.160789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.160828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004349000 len:0x1000 key:0x183000 00:39:28.868 [2024-11-06 15:43:50.160843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.160882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004347000 len:0x1000 key:0x183000 00:39:28.868 [2024-11-06 15:43:50.160897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.160936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:60000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004397000 len:0x1000 key:0x183000 00:39:28.868 [2024-11-06 15:43:50.160950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.160991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:60008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004399000 len:0x1000 key:0x183000 00:39:28.868 [2024-11-06 15:43:50.161005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.161044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.868 [2024-11-06 15:43:50.161058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.161096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.868 [2024-11-06 15:43:50.161111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.161158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.868 [2024-11-06 15:43:50.161174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.161213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.868 [2024-11-06 15:43:50.161228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.868 [2024-11-06 15:43:50.161267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.868 [2024-11-06 15:43:50.161282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.869 [2024-11-06 15:43:50.161322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:34 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:28.869 [2024-11-06 15:43:50.161337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.869 [2024-11-06 15:43:50.189458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:28.869 [2024-11-06 15:43:50.189488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:28.869 [2024-11-06 15:43:50.189507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60640 len:8 PRP1 0x0 PRP2 0x0 00:39:28.869 [2024-11-06 15:43:50.189522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:28.869 [2024-11-06 15:43:50.189771] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Unable to perform failover, already in progress. 00:39:28.869 [2024-11-06 15:43:50.189820] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Unable to perform failover, already in progress. 00:39:28.869 [2024-11-06 15:43:50.194302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:39:28.869 10286.30 IOPS, 40.18 MiB/s [2024-11-06T14:43:56.504Z] [2024-11-06 15:43:50.236300] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:39:28.869 10674.45 IOPS, 41.70 MiB/s [2024-11-06T14:43:56.504Z] 11060.58 IOPS, 43.21 MiB/s [2024-11-06T14:43:56.504Z] 11387.00 IOPS, 44.48 MiB/s [2024-11-06T14:43:56.504Z] 11666.14 IOPS, 45.57 MiB/s 00:39:28.869 Latency(us) 00:39:28.869 [2024-11-06T14:43:56.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:28.869 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:28.869 Verification LBA range: start 0x0 length 0x4000 00:39:28.869 NVMe0n1 : 15.01 11891.64 46.45 298.21 0.00 10477.90 520.01 1050399.61 00:39:28.869 [2024-11-06T14:43:56.504Z] =================================================================================================================== 00:39:28.869 [2024-11-06T14:43:56.504Z] Total : 11891.64 46.45 298.21 0.00 10477.90 520.01 1050399.61 00:39:28.869 Received shutdown signal, test time was about 15.000000 seconds 00:39:28.869 00:39:28.869 Latency(us) 00:39:28.869 [2024-11-06T14:43:56.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:28.869 [2024-11-06T14:43:56.504Z] =================================================================================================================== 00:39:28.869 [2024-11-06T14:43:56.504Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:28.869 15:43:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:39:28.869 15:43:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:39:28.869 15:43:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:39:28.869 15:43:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3295922 00:39:28.869 15:43:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:39:28.869 15:43:56 nvmf_rdma.nvmf_host.nvmf_failover 
-- host/failover.sh@75 -- # waitforlisten 3295922 /var/tmp/bdevperf.sock 00:39:28.869 15:43:56 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3295922 ']' 00:39:28.869 15:43:56 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:28.869 15:43:56 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:28.869 15:43:56 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:28.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:28.869 15:43:56 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:28.869 15:43:56 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:29.804 15:43:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:29.804 15:43:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:39:29.804 15:43:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:39:29.804 [2024-11-06 15:43:57.402980] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:39:29.804 15:43:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:39:30.062 [2024-11-06 15:43:57.603746] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:39:30.062 15:43:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:39:30.320 NVMe0n1 00:39:30.320 15:43:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:39:30.577 00:39:30.577 15:43:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:39:30.835 00:39:30.835 15:43:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:39:30.835 15:43:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:39:31.093 15:43:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:31.351 15:43:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:39:34.634 15:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- 
host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:39:34.634 15:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:39:34.634 15:44:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3296660 00:39:34.634 15:44:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:34.634 15:44:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3296660 00:39:36.010 { 00:39:36.010 "results": [ 00:39:36.010 { 00:39:36.010 "job": "NVMe0n1", 00:39:36.010 "core_mask": "0x1", 00:39:36.010 "workload": "verify", 00:39:36.010 "status": "finished", 00:39:36.010 "verify_range": { 00:39:36.010 "start": 0, 00:39:36.010 "length": 16384 00:39:36.011 }, 00:39:36.011 "queue_depth": 128, 00:39:36.011 "io_size": 4096, 00:39:36.011 "runtime": 1.00716, 00:39:36.011 "iops": 14996.624170936098, 00:39:36.011 "mibps": 58.580563167719134, 00:39:36.011 "io_failed": 0, 00:39:36.011 "io_timeout": 0, 00:39:36.011 "avg_latency_us": 8489.597170228444, 00:39:36.011 "min_latency_us": 3333.7878260869566, 00:39:36.011 "max_latency_us": 19831.76347826087 00:39:36.011 } 00:39:36.011 ], 00:39:36.011 "core_count": 1 00:39:36.011 } 00:39:36.011 15:44:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:39:36.011 [2024-11-06 15:43:56.395537] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:39:36.011 [2024-11-06 15:43:56.395643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3295922 ] 00:39:36.011 [2024-11-06 15:43:56.543466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:36.011 [2024-11-06 15:43:56.654132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:36.011 [2024-11-06 15:43:58.845553] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:39:36.011 [2024-11-06 15:43:58.846183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:39:36.011 [2024-11-06 15:43:58.846255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:39:36.011 [2024-11-06 15:43:58.876750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] CQ transport error -6 (No such device or address) on qpair id 0 00:39:36.011 [2024-11-06 15:43:58.901442] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:39:36.011 Running I/O for 1 seconds... 
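For reference, the throughput value in the bdevperf JSON results above follows directly from the reported IOPS and the 4096-byte I/O size (MiB/s = IOPS * io_size / 2^20). The short standalone snippet below is not part of the test scripts; it only recomputes the figure from the numbers shown in this log:

  # Recompute "mibps" from the "iops" and "io_size" fields reported above.
  # The constants are copied verbatim from the JSON results in this log.
  iops=14996.624170936098
  io_size=4096
  awk -v iops="$iops" -v sz="$io_size" 'BEGIN {
      # MiB/s = IOPS * bytes per I/O / 2^20
      printf "%.2f MiB/s\n", iops * sz / (1024 * 1024)
  }'
  # Prints 58.58 MiB/s, matching the "mibps" field and the Latency table total.
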
00:39:36.011 14976.00 IOPS, 58.50 MiB/s 00:39:36.011 Latency(us) 00:39:36.011 [2024-11-06T14:44:03.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:36.011 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:36.011 Verification LBA range: start 0x0 length 0x4000 00:39:36.011 NVMe0n1 : 1.01 14996.62 58.58 0.00 0.00 8489.60 3333.79 19831.76 00:39:36.011 [2024-11-06T14:44:03.646Z] =================================================================================================================== 00:39:36.011 [2024-11-06T14:44:03.646Z] Total : 14996.62 58.58 0.00 0.00 8489.60 3333.79 19831.76 00:39:36.011 15:44:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:39:36.011 15:44:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:39:36.011 15:44:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:36.011 15:44:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:39:36.011 15:44:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:39:36.269 15:44:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:36.528 15:44:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:39:39.812 15:44:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:39:39.812 15:44:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:39:39.812 15:44:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3295922 00:39:39.812 15:44:07 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3295922 ']' 00:39:39.812 15:44:07 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3295922 00:39:39.812 15:44:07 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:39:39.812 15:44:07 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:39.812 15:44:07 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3295922 00:39:39.812 15:44:07 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:39.812 15:44:07 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:39.812 15:44:07 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3295922' 00:39:39.812 killing process with pid 3295922 00:39:39.812 15:44:07 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3295922 00:39:39.812 15:44:07 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3295922 00:39:40.748 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- 
host/failover.sh@110 -- # sync 00:39:40.748 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:41.007 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:39:41.007 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:39:41.007 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:39:41.007 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:41.007 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:39:41.007 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:39:41.007 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:39:41.007 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:39:41.007 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:41.007 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:39:41.007 rmmod nvme_rdma 00:39:41.007 rmmod nvme_fabrics 00:39:41.007 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:41.007 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:39:41.007 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:39:41.007 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3293344 ']' 00:39:41.007 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3293344 00:39:41.007 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3293344 ']' 00:39:41.007 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3293344 00:39:41.007 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:39:41.007 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:41.007 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3293344 00:39:41.007 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:39:41.007 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:39:41.007 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3293344' 00:39:41.007 killing process with pid 3293344 00:39:41.007 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3293344 00:39:41.007 15:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3293344 00:39:42.911 15:44:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:42.911 15:44:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:39:42.911 00:39:42.911 real 0m41.572s 00:39:42.911 user 2m16.579s 00:39:42.911 sys 0m8.333s 00:39:42.911 15:44:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:42.911 15:44:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
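The pass/fail decision for this test (summarized in the banner below) comes from counting "Resetting controller successful" messages in the bdevperf output, as traced at host/failover.sh@65-67 earlier in this log. A minimal sketch of that check, assuming the try.txt path and the expected count of 3 visible in this run:

  # Sketch of the verification seen at host/failover.sh@65-67 above:
  # each successful failover logs "Resetting controller successful" once,
  # so the count in the bdevperf output must equal the expected failovers.
  log=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
  count=$(grep -c 'Resetting controller successful' "$log")
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, got $count" >&2
      exit 1
  fi
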
00:39:42.911 ************************************ 00:39:42.911 END TEST nvmf_failover 00:39:42.911 ************************************ 00:39:42.911 15:44:10 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:39:42.911 15:44:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:39:42.911 15:44:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:42.911 15:44:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.911 ************************************ 00:39:42.911 START TEST nvmf_host_discovery 00:39:42.911 ************************************ 00:39:42.911 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:39:42.911 * Looking for test storage... 00:39:42.911 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:39:42.911 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:42.911 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:39:42.911 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:43.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:43.171 --rc genhtml_branch_coverage=1 00:39:43.171 --rc genhtml_function_coverage=1 00:39:43.171 --rc genhtml_legend=1 00:39:43.171 --rc geninfo_all_blocks=1 00:39:43.171 --rc geninfo_unexecuted_blocks=1 00:39:43.171 00:39:43.171 ' 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:43.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:43.171 --rc genhtml_branch_coverage=1 00:39:43.171 --rc genhtml_function_coverage=1 00:39:43.171 --rc genhtml_legend=1 00:39:43.171 --rc geninfo_all_blocks=1 00:39:43.171 --rc geninfo_unexecuted_blocks=1 00:39:43.171 00:39:43.171 ' 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:43.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:43.171 --rc genhtml_branch_coverage=1 00:39:43.171 --rc genhtml_function_coverage=1 00:39:43.171 --rc genhtml_legend=1 00:39:43.171 --rc geninfo_all_blocks=1 00:39:43.171 --rc geninfo_unexecuted_blocks=1 00:39:43.171 00:39:43.171 ' 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:43.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:43.171 --rc genhtml_branch_coverage=1 00:39:43.171 --rc genhtml_function_coverage=1 00:39:43.171 --rc genhtml_legend=1 00:39:43.171 --rc geninfo_all_blocks=1 00:39:43.171 --rc geninfo_unexecuted_blocks=1 00:39:43.171 00:39:43.171 ' 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:39:43.171 15:44:10 
nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:39:43.171 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:43.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the 
same IP for host and target.' 00:39:43.172 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:39:43.172 00:39:43.172 real 0m0.232s 00:39:43.172 user 0m0.135s 00:39:43.172 sys 0m0.117s 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:43.172 ************************************ 00:39:43.172 END TEST nvmf_host_discovery 00:39:43.172 ************************************ 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.172 ************************************ 00:39:43.172 START TEST nvmf_host_multipath_status 00:39:43.172 ************************************ 00:39:43.172 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:39:43.432 * Looking for test storage... 00:39:43.432 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:39:43.432 15:44:10 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:43.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:43.432 --rc genhtml_branch_coverage=1 00:39:43.432 --rc genhtml_function_coverage=1 00:39:43.432 --rc genhtml_legend=1 00:39:43.432 --rc geninfo_all_blocks=1 00:39:43.432 --rc geninfo_unexecuted_blocks=1 00:39:43.432 00:39:43.432 ' 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:43.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:43.432 --rc genhtml_branch_coverage=1 00:39:43.432 --rc genhtml_function_coverage=1 00:39:43.432 --rc genhtml_legend=1 00:39:43.432 --rc geninfo_all_blocks=1 00:39:43.432 --rc geninfo_unexecuted_blocks=1 00:39:43.432 00:39:43.432 ' 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:43.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:43.432 --rc genhtml_branch_coverage=1 00:39:43.432 --rc genhtml_function_coverage=1 00:39:43.432 --rc genhtml_legend=1 00:39:43.432 --rc geninfo_all_blocks=1 00:39:43.432 --rc geninfo_unexecuted_blocks=1 00:39:43.432 00:39:43.432 ' 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:43.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:43.432 --rc genhtml_branch_coverage=1 00:39:43.432 --rc genhtml_function_coverage=1 
00:39:43.432 --rc genhtml_legend=1 00:39:43.432 --rc geninfo_all_blocks=1 00:39:43.432 --rc geninfo_unexecuted_blocks=1 00:39:43.432 00:39:43.432 ' 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:39:43.432 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:39:43.433 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:39:43.433 15:44:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:50.008 15:44:17 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # 
(( 2 == 0 )) 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:39:50.008 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:39:50.008 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:50.008 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:39:50.008 Found net devices under 0000:18:00.0: mlx_0_0 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:39:50.268 Found net devices under 0000:18:00.1: mlx_0_1 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # rdma_device_init 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@530 -- # allocate_nic_ips 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # get_rdma_if_list 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:39:50.268 
15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:39:50.268 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:39:50.268 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:39:50.268 altname enp24s0f0np0 00:39:50.268 altname ens785f0np0 00:39:50.268 inet 192.168.100.8/24 scope global mlx_0_0 00:39:50.268 valid_lft forever preferred_lft forever 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 
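For reference, the get_ip_address step traced above amounts to a short pipeline over "ip -o -4 addr show"; the lines below are a condensed sketch assuming the interface names and addresses seen in this log (mlx_0_0 / mlx_0_1, 192.168.100.8 / .9), not the verbatim helper from nvmf/common.sh:

# Condensed sketch of the IPv4 lookup traced in this log.
get_ip_address() {
    local interface=$1
    # Column 4 of "ip -o -4 addr show" is "ADDR/PREFIX"; strip the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # prints 192.168.100.8 on this test bed
get_ip_address mlx_0_1   # prints 192.168.100.9 on this test bed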
00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:39:50.268 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:39:50.268 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:39:50.268 altname enp24s0f1np1 00:39:50.268 altname ens785f1np1 00:39:50.268 inet 192.168.100.9/24 scope global mlx_0_1 00:39:50.268 valid_lft forever preferred_lft forever 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:39:50.268 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:39:50.269 192.168.100.9' 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:39:50.269 192.168.100.9' 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # head -n 1 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:39:50.269 192.168.100.9' 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # tail -n +2 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # head -n 1 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:50.269 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:39:50.528 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@509 -- # nvmfpid=3300713 00:39:50.528 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:39:50.528 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3300713 00:39:50.528 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 3300713 ']' 00:39:50.528 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:50.528 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:50.528 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:50.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:50.528 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:50.528 15:44:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:39:50.528 [2024-11-06 15:44:17.997171] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:39:50.528 [2024-11-06 15:44:17.997280] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:50.528 [2024-11-06 15:44:18.146217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:50.787 [2024-11-06 15:44:18.253827] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:50.787 [2024-11-06 15:44:18.253876] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:50.787 [2024-11-06 15:44:18.253891] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:50.787 [2024-11-06 15:44:18.253904] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:50.787 [2024-11-06 15:44:18.253914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:50.787 [2024-11-06 15:44:18.255742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:50.787 [2024-11-06 15:44:18.255769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:51.354 15:44:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:51.354 15:44:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:39:51.354 15:44:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:51.354 15:44:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:51.354 15:44:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:39:51.354 15:44:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:51.354 15:44:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3300713 00:39:51.354 15:44:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:39:51.613 [2024-11-06 15:44:19.055304] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028b40/0x7f2b7b992940) succeed. 00:39:51.613 [2024-11-06 15:44:19.064632] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028cc0/0x7f2b7b94e940) succeed. 00:39:51.872 15:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:39:51.872 Malloc0 00:39:52.131 15:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:39:52.131 15:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:52.389 15:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:39:52.647 [2024-11-06 15:44:20.095610] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:39:52.647 15:44:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:39:52.905 [2024-11-06 15:44:20.308061] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:39:52.905 15:44:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3301067 00:39:52.905 15:44:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:39:52.905 15:44:20 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:39:52.905 15:44:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3301067 /var/tmp/bdevperf.sock 00:39:52.905 15:44:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 3301067 ']' 00:39:52.905 15:44:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:52.905 15:44:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:52.905 15:44:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:52.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:52.905 15:44:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:52.905 15:44:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:39:53.838 15:44:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:53.838 15:44:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:39:53.838 15:44:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:39:53.838 15:44:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:39:54.096 Nvme0n1 00:39:54.354 15:44:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:39:54.612 Nvme0n1 00:39:54.612 15:44:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:39:54.612 15:44:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:39:56.512 15:44:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:39:56.512 15:44:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:39:56.769 15:44:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:39:57.027 15:44:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:39:57.963 15:44:25 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:39:57.963 15:44:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:39:57.963 15:44:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:57.963 15:44:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:39:58.222 15:44:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:58.222 15:44:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:39:58.222 15:44:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:58.222 15:44:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:39:58.481 15:44:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:39:58.481 15:44:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:39:58.481 15:44:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:58.481 15:44:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:39:58.481 15:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:58.481 15:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:39:58.481 15:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:58.481 15:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:39:58.739 15:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:58.739 15:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:39:58.739 15:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:58.739 15:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:39:58.998 15:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:58.998 15:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:39:58.998 15:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:58.998 15:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:39:59.257 15:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:59.257 15:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:39:59.257 15:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:39:59.257 15:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:39:59.516 15:44:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:40:00.892 15:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:40:00.892 15:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:40:00.892 15:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:00.892 15:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:00.892 15:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:00.892 15:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:00.892 15:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:00.892 15:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:00.892 15:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:00.892 15:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:00.892 15:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:00.892 15:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:01.151 15:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e 
]] 00:40:01.151 15:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:01.151 15:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:01.151 15:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:01.410 15:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:01.410 15:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:01.410 15:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:01.410 15:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:01.668 15:44:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:01.668 15:44:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:01.668 15:44:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:01.668 15:44:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:01.927 15:44:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:01.927 15:44:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:40:01.927 15:44:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:40:01.927 15:44:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:40:02.192 15:44:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:40:03.129 15:44:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:40:03.129 15:44:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:03.129 15:44:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:03.129 15:44:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:03.387 15:44:30 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:03.387 15:44:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:40:03.388 15:44:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:03.388 15:44:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:03.645 15:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:03.645 15:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:03.645 15:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:03.645 15:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:03.903 15:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:03.903 15:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:03.903 15:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:03.903 15:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:04.162 15:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:04.162 15:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:04.162 15:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:04.162 15:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:04.421 15:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:04.421 15:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:04.421 15:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:04.421 15:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:04.421 15:44:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:04.421 15:44:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
set_ANA_state non_optimized inaccessible 00:40:04.421 15:44:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:40:04.679 15:44:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:40:04.937 15:44:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:40:05.871 15:44:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:40:05.871 15:44:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:05.871 15:44:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:05.871 15:44:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:06.129 15:44:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:06.129 15:44:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:40:06.129 15:44:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:06.129 15:44:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:06.388 15:44:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:06.388 15:44:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:06.388 15:44:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:06.388 15:44:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:06.646 15:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:06.646 15:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:06.646 15:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:06.646 15:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:06.646 15:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:06.646 15:44:34 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:06.646 15:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:06.646 15:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:06.905 15:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:06.905 15:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:40:06.905 15:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:06.905 15:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:07.163 15:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:07.163 15:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:40:07.163 15:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:40:07.422 15:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:40:07.680 15:44:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:40:08.614 15:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:40:08.614 15:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:40:08.614 15:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:08.614 15:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:08.873 15:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:08.873 15:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:40:08.873 15:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:08.873 15:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:08.873 15:44:36 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:08.873 15:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:09.131 15:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:09.131 15:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:09.131 15:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:09.131 15:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:09.131 15:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:09.131 15:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:09.390 15:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:09.390 15:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:40:09.390 15:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:09.390 15:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:09.648 15:44:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:09.648 15:44:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:40:09.648 15:44:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:09.648 15:44:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:09.907 15:44:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:09.907 15:44:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:40:09.907 15:44:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:40:09.907 15:44:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:40:10.165 15:44:37 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:40:11.541 15:44:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:40:11.541 15:44:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:40:11.541 15:44:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:11.541 15:44:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:11.541 15:44:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:11.541 15:44:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:11.541 15:44:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:11.541 15:44:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:11.541 15:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:11.541 15:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:11.541 15:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:11.541 15:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:11.800 15:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:11.800 15:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:11.800 15:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:11.800 15:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:12.058 15:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:12.058 15:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:40:12.058 15:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:12.059 15:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:12.318 15:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
[[ false == \f\a\l\s\e ]] 00:40:12.318 15:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:12.318 15:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:12.318 15:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:12.576 15:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:12.576 15:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:40:12.576 15:44:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:40:12.576 15:44:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:40:12.834 15:44:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:40:13.098 15:44:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:40:14.152 15:44:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:40:14.152 15:44:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:14.152 15:44:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:14.152 15:44:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:14.412 15:44:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:14.412 15:44:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:14.412 15:44:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:14.412 15:44:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:14.412 15:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:14.412 15:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:14.412 15:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:14.412 15:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:14.671 15:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:14.671 15:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:14.671 15:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:14.671 15:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:14.930 15:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:14.930 15:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:14.930 15:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:14.930 15:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:15.189 15:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:15.189 15:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:15.189 15:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:15.189 15:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:15.448 15:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:15.448 15:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:40:15.448 15:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:40:15.707 15:44:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:40:15.707 15:44:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:40:17.088 15:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:40:17.088 15:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:40:17.088 15:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:17.088 15:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:17.088 15:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:17.088 15:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:17.088 15:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:17.088 15:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:17.347 15:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:17.347 15:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:17.347 15:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:17.347 15:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:17.347 15:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:17.347 15:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:17.347 15:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:17.347 15:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:17.607 15:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:17.607 15:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:17.607 15:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:17.607 15:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:17.866 15:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:17.866 15:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:17.866 15:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:17.866 15:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:18.126 15:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:18.126 15:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:40:18.126 15:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:40:18.385 15:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:40:18.385 15:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:40:19.766 15:44:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:40:19.766 15:44:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:19.766 15:44:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:19.766 15:44:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:19.766 15:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:19.766 15:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:19.766 15:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:19.766 15:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:20.025 15:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:20.025 15:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:20.025 15:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:20.025 15:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:20.285 15:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:20.285 15:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:20.285 15:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:40:20.285 15:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:20.285 15:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:20.285 15:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:20.285 15:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:20.285 15:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:20.545 15:44:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:20.545 15:44:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:20.545 15:44:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:20.545 15:44:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:20.804 15:44:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:20.804 15:44:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:40:20.804 15:44:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:40:21.063 15:44:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:40:21.063 15:44:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:40:22.442 15:44:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:40:22.442 15:44:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:22.442 15:44:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:22.442 15:44:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:22.442 15:44:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:22.442 15:44:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:40:22.442 15:44:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:22.442 15:44:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:22.701 15:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:22.701 15:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:22.701 15:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:22.701 15:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:22.960 15:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:22.960 15:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:22.960 15:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:22.960 15:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:22.960 15:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:22.960 15:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:22.960 15:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:22.960 15:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:23.220 15:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:23.220 15:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:40:23.220 15:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:23.220 15:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:23.480 15:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:23.480 15:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3301067 00:40:23.480 15:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 3301067 ']' 00:40:23.480 15:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 3301067 00:40:23.480 15:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@957 -- # uname 00:40:23.480 15:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:23.480 15:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3301067 00:40:23.480 15:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:40:23.480 15:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:40:23.480 15:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3301067' 00:40:23.480 killing process with pid 3301067 00:40:23.480 15:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 3301067 00:40:23.480 15:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 3301067 00:40:23.480 { 00:40:23.480 "results": [ 00:40:23.480 { 00:40:23.480 "job": "Nvme0n1", 00:40:23.480 "core_mask": "0x4", 00:40:23.480 "workload": "verify", 00:40:23.480 "status": "terminated", 00:40:23.480 "verify_range": { 00:40:23.480 "start": 0, 00:40:23.480 "length": 16384 00:40:23.480 }, 00:40:23.480 "queue_depth": 128, 00:40:23.480 "io_size": 4096, 00:40:23.480 "runtime": 28.898193, 00:40:23.480 "iops": 13673.55391390735, 00:40:23.480 "mibps": 53.412319976200585, 00:40:23.480 "io_failed": 0, 00:40:23.480 "io_timeout": 0, 00:40:23.480 "avg_latency_us": 9338.802723756397, 00:40:23.480 "min_latency_us": 79.24869565217391, 00:40:23.480 "max_latency_us": 3019898.88 00:40:23.480 } 00:40:23.480 ], 00:40:23.480 "core_count": 1 00:40:23.480 } 00:40:24.425 15:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3301067 00:40:24.425 15:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:40:24.425 [2024-11-06 15:44:20.424931] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:40:24.425 [2024-11-06 15:44:20.425039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3301067 ] 00:40:24.425 [2024-11-06 15:44:20.577646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:24.425 [2024-11-06 15:44:20.688815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:24.425 Running I/O for 90 seconds... 
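The terminated bdevperf summary printed above is internally consistent: with "io_size": 4096, 13673.55 IOPS x 4096 B is approximately 53.41 MiB/s, matching the reported "mibps" value, and over the 28.898 s runtime that amounts to roughly 395k completed I/Os.

For orientation, the accessible/current/connected probes repeated throughout this trace all reduce to one pattern: query the bdevperf controller's io_paths over its RPC socket and compare a single boolean field for the listener on a given port. The sketch below is reconstructed from the trace on the assumption that it mirrors the port_status helper referenced by the multipath_status.sh@64 markers; the verbatim test script remains the authority.

# Probe one field (current/connected/accessible) of the io_path on a given port
# and succeed only if it matches the expected value.
port_status() {
    local port=$1 field=$2 expected=$3
    local value
    value=$(/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$value" == "$expected" ]]
}

# Example mirroring the trace: after the target sets the 4421 listener to
# optimized and the host is given about a second to process the ANA change,
# the path on 4421 is expected to be the current (active) one.
port_status 4421 current true

Each check_status round in the log follows the same rhythm: nvmf_subsystem_listener_set_ana_state on the target for ports 4420 and 4421, sleep 1 so the host controller can react to the ANA state change, then six port_status probes (current, connected, accessible for each of the two ports).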
00:40:24.425 15616.00 IOPS, 61.00 MiB/s [2024-11-06T14:44:52.060Z] 15744.00 IOPS, 61.50 MiB/s [2024-11-06T14:44:52.060Z] 15765.33 IOPS, 61.58 MiB/s [2024-11-06T14:44:52.060Z] 15776.00 IOPS, 61.62 MiB/s [2024-11-06T14:44:52.060Z] 15772.00 IOPS, 61.61 MiB/s [2024-11-06T14:44:52.060Z] 15801.83 IOPS, 61.73 MiB/s [2024-11-06T14:44:52.060Z] 15806.57 IOPS, 61.74 MiB/s [2024-11-06T14:44:52.060Z] 15792.88 IOPS, 61.69 MiB/s [2024-11-06T14:44:52.060Z] 15784.67 IOPS, 61.66 MiB/s [2024-11-06T14:44:52.060Z] 15759.80 IOPS, 61.56 MiB/s [2024-11-06T14:44:52.060Z] 15774.18 IOPS, 61.62 MiB/s [2024-11-06T14:44:52.060Z] 15781.58 IOPS, 61.65 MiB/s [2024-11-06T14:44:52.061Z] [2024-11-06 15:44:34.852315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:33784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.426 [2024-11-06 15:44:34.852382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.852442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:33280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004387000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.852461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.852479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fd000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.852496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.852513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:33296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fb000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.852528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.852545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:33304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f9000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.852563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.852580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:33312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f7000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.852596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.852613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:33320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f5000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.852629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.852647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f3000 len:0x1000 
key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.852662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.852679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f1000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.852695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.852720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ef000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.852737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.852754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:33352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ed000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.852769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.852785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043eb000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.852801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.852817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:33368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e9000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.852835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.852851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:33376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e7000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.852865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.852880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:33384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e5000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.852895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.852911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:33392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e3000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.852925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.852941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e1000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 
15:44:34.852955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.852973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:33408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004361000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.852989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.853004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a9000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.853019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.853044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:33424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435f000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.853058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.853076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:33432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ad000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.853095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.853111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043af000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.853132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.853148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b1000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.853163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.853178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:33456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b3000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.853192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.853208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:33464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b5000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.853222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.853239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:33472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b7000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.853253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.853269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:33480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b9000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.853283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.853299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bb000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.853313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.853329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bd000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.853346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:40:24.426 [2024-11-06 15:44:34.853362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bf000 len:0x1000 key:0x180d00 00:40:24.426 [2024-11-06 15:44:34.853376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.853393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c1000 len:0x1000 key:0x180d00 00:40:24.427 [2024-11-06 15:44:34.853407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.853426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:33520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c3000 len:0x1000 key:0x180d00 00:40:24.427 [2024-11-06 15:44:34.853444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.853460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:33528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c5000 len:0x1000 key:0x180d00 00:40:24.427 [2024-11-06 15:44:34.853476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.853492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:33536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c7000 len:0x1000 key:0x180d00 00:40:24.427 [2024-11-06 15:44:34.853509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.853525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:33544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c9000 len:0x1000 key:0x180d00 00:40:24.427 [2024-11-06 15:44:34.853539] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.853556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cb000 len:0x1000 key:0x180d00 00:40:24.427 [2024-11-06 15:44:34.853571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.853586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:33560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cd000 len:0x1000 key:0x180d00 00:40:24.427 [2024-11-06 15:44:34.853604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.853620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:33568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cf000 len:0x1000 key:0x180d00 00:40:24.427 [2024-11-06 15:44:34.853635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.853650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:33576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d1000 len:0x1000 key:0x180d00 00:40:24.427 [2024-11-06 15:44:34.853665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.853680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d3000 len:0x1000 key:0x180d00 00:40:24.427 [2024-11-06 15:44:34.853694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.853710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d5000 len:0x1000 key:0x180d00 00:40:24.427 [2024-11-06 15:44:34.853725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.853740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:33600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d7000 len:0x1000 key:0x180d00 00:40:24.427 [2024-11-06 15:44:34.853755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.853771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:33608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d9000 len:0x1000 key:0x180d00 00:40:24.427 [2024-11-06 15:44:34.853787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.853803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:33616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043db000 len:0x1000 key:0x180d00 00:40:24.427 [2024-11-06 15:44:34.853818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:92 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.853834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:33624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043dd000 len:0x1000 key:0x180d00 00:40:24.427 [2024-11-06 15:44:34.853852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.853869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:33792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.427 [2024-11-06 15:44:34.853884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.853899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:33800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.427 [2024-11-06 15:44:34.853913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.853929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:33808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.427 [2024-11-06 15:44:34.853944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.853959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:33816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.427 [2024-11-06 15:44:34.853974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.853990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.427 [2024-11-06 15:44:34.854005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.854020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.427 [2024-11-06 15:44:34.854034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.854049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:33840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.427 [2024-11-06 15:44:34.854064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.854095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:33848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.427 [2024-11-06 15:44:34.854113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.854135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:33856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.427 [2024-11-06 15:44:34.854150] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.854167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:33864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.427 [2024-11-06 15:44:34.854184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.854200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:33872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.427 [2024-11-06 15:44:34.854214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.854230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:33880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.427 [2024-11-06 15:44:34.854244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.854260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:33888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.427 [2024-11-06 15:44:34.854276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.854291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:33896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.427 [2024-11-06 15:44:34.854307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.854322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:33904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.427 [2024-11-06 15:44:34.854337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.854352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:33912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.427 [2024-11-06 15:44:34.854369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.854385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:33920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.427 [2024-11-06 15:44:34.854399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.854414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:33928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.427 [2024-11-06 15:44:34.854430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.854445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:33936 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:40:24.427 [2024-11-06 15:44:34.854460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:40:24.427 [2024-11-06 15:44:34.854476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.427 [2024-11-06 15:44:34.854491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.854507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:33952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.854522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.854537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:33960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.854552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.854569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:33968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.854583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.854600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:33976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.854618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.854633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:33984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.854649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.854667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.854682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.854698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:34000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.854713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.854729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:34008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.854744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.854761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:24 nsid:1 lba:33632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004365000 len:0x1000 key:0x180d00 00:40:24.428 [2024-11-06 15:44:34.854776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.854791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:34016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.854806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.854821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:34024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.854836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.854852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:33640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a7000 len:0x1000 key:0x180d00 00:40:24.428 [2024-11-06 15:44:34.854869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.854885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:33648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435d000 len:0x1000 key:0x180d00 00:40:24.428 [2024-11-06 15:44:34.854901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.854917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:34032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.854933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.854950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.854965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.854980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.854995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.855011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.855026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.855042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:34064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.855057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 
cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.855079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:34072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.855094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.855109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:34080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.855132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.855149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:34088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.855164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.855180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:34096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.855194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.855210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:34104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.855226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.855241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.855256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.855272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:34120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.855287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.855302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.855317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.855334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.855349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.855364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.855381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.855397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.855413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.855428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.855443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.855900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.855922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.855946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.428 [2024-11-06 15:44:34.855962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:40:24.428 [2024-11-06 15:44:34.855983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:34184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.429 [2024-11-06 15:44:34.855998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.856018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.429 [2024-11-06 15:44:34.856033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.856054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.429 [2024-11-06 15:44:34.856068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.856089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.429 [2024-11-06 15:44:34.856106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.856136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.429 [2024-11-06 15:44:34.856152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.856173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:34224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.429 [2024-11-06 15:44:34.856188] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.856356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:34232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.429 [2024-11-06 15:44:34.856375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.856398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:34240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.429 [2024-11-06 15:44:34.856414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.856434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:34248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.429 [2024-11-06 15:44:34.856450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.856471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:34256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.429 [2024-11-06 15:44:34.856487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.856508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:34264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.429 [2024-11-06 15:44:34.856523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.856544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.429 [2024-11-06 15:44:34.856562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.856583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.429 [2024-11-06 15:44:34.856598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.856618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.429 [2024-11-06 15:44:34.856633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.856659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:34296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.429 [2024-11-06 15:44:34.856674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.856695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:33656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043df000 
len:0x1000 key:0x180d00 00:40:24.429 [2024-11-06 15:44:34.856711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.856732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:33664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ab000 len:0x1000 key:0x180d00 00:40:24.429 [2024-11-06 15:44:34.856747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.856768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:33672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004369000 len:0x1000 key:0x180d00 00:40:24.429 [2024-11-06 15:44:34.856783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.856805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:33680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436b000 len:0x1000 key:0x180d00 00:40:24.429 [2024-11-06 15:44:34.856822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.856843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:33688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436d000 len:0x1000 key:0x180d00 00:40:24.429 [2024-11-06 15:44:34.856860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.856882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:33696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436f000 len:0x1000 key:0x180d00 00:40:24.429 [2024-11-06 15:44:34.856897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.856919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:33704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004371000 len:0x1000 key:0x180d00 00:40:24.429 [2024-11-06 15:44:34.856935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.856958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:33712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004373000 len:0x1000 key:0x180d00 00:40:24.429 [2024-11-06 15:44:34.856974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.856995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:33720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004375000 len:0x1000 key:0x180d00 00:40:24.429 [2024-11-06 15:44:34.857010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.857031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:33728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004377000 len:0x1000 key:0x180d00 00:40:24.429 
[2024-11-06 15:44:34.857046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.857068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:33736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004379000 len:0x1000 key:0x180d00 00:40:24.429 [2024-11-06 15:44:34.857082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.857104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:33744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437b000 len:0x1000 key:0x180d00 00:40:24.429 [2024-11-06 15:44:34.857119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.857147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:33752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437d000 len:0x1000 key:0x180d00 00:40:24.429 [2024-11-06 15:44:34.857165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.857187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:33760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437f000 len:0x1000 key:0x180d00 00:40:24.429 [2024-11-06 15:44:34.857202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.857223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:33768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004381000 len:0x1000 key:0x180d00 00:40:24.429 [2024-11-06 15:44:34.857292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:34.857315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:33776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004383000 len:0x1000 key:0x180d00 00:40:24.429 [2024-11-06 15:44:34.857330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:24.429 15443.69 IOPS, 60.33 MiB/s [2024-11-06T14:44:52.064Z] 14340.57 IOPS, 56.02 MiB/s [2024-11-06T14:44:52.064Z] 13384.53 IOPS, 52.28 MiB/s [2024-11-06T14:44:52.064Z] 12823.00 IOPS, 50.09 MiB/s [2024-11-06T14:44:52.064Z] 13003.59 IOPS, 50.80 MiB/s [2024-11-06T14:44:52.064Z] 13166.56 IOPS, 51.43 MiB/s [2024-11-06T14:44:52.064Z] 13205.26 IOPS, 51.58 MiB/s [2024-11-06T14:44:52.064Z] 13207.50 IOPS, 51.59 MiB/s [2024-11-06T14:44:52.064Z] 13216.86 IOPS, 51.63 MiB/s [2024-11-06T14:44:52.064Z] 13340.77 IOPS, 52.11 MiB/s [2024-11-06T14:44:52.064Z] 13450.78 IOPS, 52.54 MiB/s [2024-11-06T14:44:52.064Z] 13524.00 IOPS, 52.83 MiB/s [2024-11-06T14:44:52.064Z] 13516.28 IOPS, 52.80 MiB/s [2024-11-06T14:44:52.064Z] 13500.96 IOPS, 52.74 MiB/s [2024-11-06T14:44:52.064Z] [2024-11-06 15:44:48.663681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:107264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bd000 len:0x1000 key:0x180d00 00:40:24.429 [2024-11-06 15:44:48.663749] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:48.663782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:107296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432f000 len:0x1000 key:0x180d00 00:40:24.429 [2024-11-06 15:44:48.663798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:48.663817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:107328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cf000 len:0x1000 key:0x180d00 00:40:24.429 [2024-11-06 15:44:48.663845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:40:24.429 [2024-11-06 15:44:48.663863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b7000 len:0x1000 key:0x180d00 00:40:24.429 [2024-11-06 15:44:48.663879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 15:44:48.663896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:107368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ef000 len:0x1000 key:0x180d00 00:40:24.430 [2024-11-06 15:44:48.663916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 15:44:48.663934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.430 [2024-11-06 15:44:48.663950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 15:44:48.663966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.430 [2024-11-06 15:44:48.663981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 15:44:48.663998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.430 [2024-11-06 15:44:48.664014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 15:44:48.664030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004377000 len:0x1000 key:0x180d00 00:40:24.430 [2024-11-06 15:44:48.664052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 15:44:48.664400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.430 [2024-11-06 15:44:48.664422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 
15:44:48.664442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:107440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436d000 len:0x1000 key:0x180d00 00:40:24.430 [2024-11-06 15:44:48.664458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 15:44:48.664473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:107872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.430 [2024-11-06 15:44:48.664488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 15:44:48.664504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.430 [2024-11-06 15:44:48.664521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 15:44:48.664538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c1000 len:0x1000 key:0x180d00 00:40:24.430 [2024-11-06 15:44:48.664555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 15:44:48.664571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.430 [2024-11-06 15:44:48.664586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 15:44:48.664601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:107504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ab000 len:0x1000 key:0x180d00 00:40:24.430 [2024-11-06 15:44:48.664616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 15:44:48.664631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433f000 len:0x1000 key:0x180d00 00:40:24.430 [2024-11-06 15:44:48.664645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 15:44:48.664661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:107304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433d000 len:0x1000 key:0x180d00 00:40:24.430 [2024-11-06 15:44:48.664676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 15:44:48.664691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:107336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fb000 len:0x1000 key:0x180d00 00:40:24.430 [2024-11-06 15:44:48.664706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 15:44:48.664723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:107920 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.430 [2024-11-06 15:44:48.664737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 15:44:48.664752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:107376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a9000 len:0x1000 key:0x180d00 00:40:24.430 [2024-11-06 15:44:48.664772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 15:44:48.664788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.430 [2024-11-06 15:44:48.664802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 15:44:48.664817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.430 [2024-11-06 15:44:48.664832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 15:44:48.664847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.430 [2024-11-06 15:44:48.664862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 15:44:48.664877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:107984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.430 [2024-11-06 15:44:48.664892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 15:44:48.664908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:107992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.430 [2024-11-06 15:44:48.664922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 15:44:48.664938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.430 [2024-11-06 15:44:48.664953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 15:44:48.664969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:108016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.430 [2024-11-06 15:44:48.664983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 15:44:48.664999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.430 [2024-11-06 15:44:48.665017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:40:24.430 [2024-11-06 15:44:48.665032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:107472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004361000 len:0x1000 key:0x180d00 00:40:24.431 [2024-11-06 15:44:48.665047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.665063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.431 [2024-11-06 15:44:48.665078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.665094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:107512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004381000 len:0x1000 key:0x180d00 00:40:24.431 [2024-11-06 15:44:48.665108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.665135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:107528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004371000 len:0x1000 key:0x180d00 00:40:24.431 [2024-11-06 15:44:48.665153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.665168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.431 [2024-11-06 15:44:48.665182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.665198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bb000 len:0x1000 key:0x180d00 00:40:24.431 [2024-11-06 15:44:48.665213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.665229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:107584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432b000 len:0x1000 key:0x180d00 00:40:24.431 [2024-11-06 15:44:48.665243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.665258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:107608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004351000 len:0x1000 key:0x180d00 00:40:24.431 [2024-11-06 15:44:48.665278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.665301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:107632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435d000 len:0x1000 key:0x180d00 00:40:24.431 [2024-11-06 15:44:48.665324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.665343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:107648 len:8 SGL 
KEYED DATA BLOCK ADDRESS 0x2000043df000 len:0x1000 key:0x180d00 00:40:24.431 [2024-11-06 15:44:48.665360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.665375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.431 [2024-11-06 15:44:48.665390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.665407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004367000 len:0x1000 key:0x180d00 00:40:24.431 [2024-11-06 15:44:48.665421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.665577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004345000 len:0x1000 key:0x180d00 00:40:24.431 [2024-11-06 15:44:48.665597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.665614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.431 [2024-11-06 15:44:48.665629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.665645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004385000 len:0x1000 key:0x180d00 00:40:24.431 [2024-11-06 15:44:48.665661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.665677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.431 [2024-11-06 15:44:48.665712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.665728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:107776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b9000 len:0x1000 key:0x180d00 00:40:24.431 [2024-11-06 15:44:48.665744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.665760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:107792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a3000 len:0x1000 key:0x180d00 00:40:24.431 [2024-11-06 15:44:48.665775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.665791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.431 [2024-11-06 15:44:48.665807] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.665824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:107536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004355000 len:0x1000 key:0x180d00 00:40:24.431 [2024-11-06 15:44:48.665838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.665854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.431 [2024-11-06 15:44:48.665869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.665885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.431 [2024-11-06 15:44:48.665900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.665916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:107592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c3000 len:0x1000 key:0x180d00 00:40:24.431 [2024-11-06 15:44:48.665933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.665949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.431 [2024-11-06 15:44:48.665966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.665981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.431 [2024-11-06 15:44:48.665997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.666013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:107656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004329000 len:0x1000 key:0x180d00 00:40:24.431 [2024-11-06 15:44:48.666028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.666045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.431 [2024-11-06 15:44:48.666062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.666078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.431 [2024-11-06 15:44:48.666092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.666108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 
nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.431 [2024-11-06 15:44:48.666130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.666147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.431 [2024-11-06 15:44:48.666162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.666178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.431 [2024-11-06 15:44:48.666193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.666209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.431 [2024-11-06 15:44:48.666226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.666242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.431 [2024-11-06 15:44:48.666257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.666273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.431 [2024-11-06 15:44:48.666287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.666303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.431 [2024-11-06 15:44:48.666320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.666345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432f000 len:0x1000 key:0x180d00 00:40:24.431 [2024-11-06 15:44:48.666361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:40:24.431 [2024-11-06 15:44:48.666377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b7000 len:0x1000 key:0x180d00 00:40:24.432 [2024-11-06 15:44:48.666392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.666407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.432 [2024-11-06 15:44:48.666429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 
00:40:24.432 [2024-11-06 15:44:48.666445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.432 [2024-11-06 15:44:48.666462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.668438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.432 [2024-11-06 15:44:48.668475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.668497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.432 [2024-11-06 15:44:48.668512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.668793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:108320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.432 [2024-11-06 15:44:48.668813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.668831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.432 [2024-11-06 15:44:48.668847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.668864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.432 [2024-11-06 15:44:48.668879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.668896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437f000 len:0x1000 key:0x180d00 00:40:24.432 [2024-11-06 15:44:48.668911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.668927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:108360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.432 [2024-11-06 15:44:48.668944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.668960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.432 [2024-11-06 15:44:48.668976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.668993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:107912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435b000 len:0x1000 key:0x180d00 00:40:24.432 [2024-11-06 15:44:48.669011] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.669027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:107928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436f000 len:0x1000 key:0x180d00 00:40:24.432 [2024-11-06 15:44:48.669047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.669064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:107960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004363000 len:0x1000 key:0x180d00 00:40:24.432 [2024-11-06 15:44:48.669079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.669095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:108400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.432 [2024-11-06 15:44:48.669113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.669137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:108008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043eb000 len:0x1000 key:0x180d00 00:40:24.432 [2024-11-06 15:44:48.669153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.669170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:108032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fd000 len:0x1000 key:0x180d00 00:40:24.432 [2024-11-06 15:44:48.669185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.669200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.432 [2024-11-06 15:44:48.669216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.669233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:108432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.432 [2024-11-06 15:44:48.669247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.669263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:108072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a5000 len:0x1000 key:0x180d00 00:40:24.432 [2024-11-06 15:44:48.669282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.669298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:108448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.432 [2024-11-06 15:44:48.669314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 
15:44:48.669329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:108456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.432 [2024-11-06 15:44:48.669346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.669362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.432 [2024-11-06 15:44:48.669377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.669392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.432 [2024-11-06 15:44:48.669407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.669424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:108120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004395000 len:0x1000 key:0x180d00 00:40:24.432 [2024-11-06 15:44:48.669439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.669455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:108136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436b000 len:0x1000 key:0x180d00 00:40:24.432 [2024-11-06 15:44:48.669470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.669487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.432 [2024-11-06 15:44:48.669502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.669518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:108520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.432 [2024-11-06 15:44:48.669536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.669551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.432 [2024-11-06 15:44:48.669568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.669584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:108552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.432 [2024-11-06 15:44:48.669599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.669615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.432 [2024-11-06 15:44:48.669630] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.669646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:108224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ad000 len:0x1000 key:0x180d00 00:40:24.432 [2024-11-06 15:44:48.669661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.669677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:108248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434b000 len:0x1000 key:0x180d00 00:40:24.432 [2024-11-06 15:44:48.669693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.669709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:108272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004335000 len:0x1000 key:0x180d00 00:40:24.432 [2024-11-06 15:44:48.669724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.669740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:108608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.432 [2024-11-06 15:44:48.669758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.669775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:108624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.432 [2024-11-06 15:44:48.669792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.669808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:108632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.432 [2024-11-06 15:44:48.669823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:40:24.432 [2024-11-06 15:44:48.669970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:107440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436d000 len:0x1000 key:0x180d00 00:40:24.432 [2024-11-06 15:44:48.669990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.433 [2024-11-06 15:44:48.670026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.433 [2024-11-06 15:44:48.670057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670073] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:107272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433f000 len:0x1000 key:0x180d00 00:40:24.433 [2024-11-06 15:44:48.670088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:107336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fb000 len:0x1000 key:0x180d00 00:40:24.433 [2024-11-06 15:44:48.670120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a9000 len:0x1000 key:0x180d00 00:40:24.433 [2024-11-06 15:44:48.670160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.433 [2024-11-06 15:44:48.670194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:107984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.433 [2024-11-06 15:44:48.670228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:108000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.433 [2024-11-06 15:44:48.670260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.433 [2024-11-06 15:44:48.670292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.433 [2024-11-06 15:44:48.670323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004371000 len:0x1000 key:0x180d00 00:40:24.433 [2024-11-06 15:44:48.670354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:107568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bb000 len:0x1000 key:0x180d00 00:40:24.433 [2024-11-06 
15:44:48.670388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:107608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004351000 len:0x1000 key:0x180d00 00:40:24.433 [2024-11-06 15:44:48.670420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:107648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043df000 len:0x1000 key:0x180d00 00:40:24.433 [2024-11-06 15:44:48.670454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:107696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004367000 len:0x1000 key:0x180d00 00:40:24.433 [2024-11-06 15:44:48.670483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.433 [2024-11-06 15:44:48.670512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.433 [2024-11-06 15:44:48.670542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:107792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a3000 len:0x1000 key:0x180d00 00:40:24.433 [2024-11-06 15:44:48.670571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:107536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004355000 len:0x1000 key:0x180d00 00:40:24.433 [2024-11-06 15:44:48.670601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.433 [2024-11-06 15:44:48.670630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.433 [2024-11-06 15:44:48.670659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 
dnr:0 00:40:24.433 [2024-11-06 15:44:48.670675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004329000 len:0x1000 key:0x180d00 00:40:24.433 [2024-11-06 15:44:48.670689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.433 [2024-11-06 15:44:48.670717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.433 [2024-11-06 15:44:48.670748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.433 [2024-11-06 15:44:48.670776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.433 [2024-11-06 15:44:48.670805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:107296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432f000 len:0x1000 key:0x180d00 00:40:24.433 [2024-11-06 15:44:48.670833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.433 [2024-11-06 15:44:48.670870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.670886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.433 [2024-11-06 15:44:48.670898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.672529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:108640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.433 [2024-11-06 15:44:48.672556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.672585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:108648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.433 [2024-11-06 15:44:48.672599] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.672615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.433 [2024-11-06 15:44:48.672628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.672895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:108368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004393000 len:0x1000 key:0x180d00 00:40:24.433 [2024-11-06 15:44:48.672912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.672930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:108392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bd000 len:0x1000 key:0x180d00 00:40:24.433 [2024-11-06 15:44:48.672944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:40:24.433 [2024-11-06 15:44:48.672960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.433 [2024-11-06 15:44:48.672973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.672989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:108416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e1000 len:0x1000 key:0x180d00 00:40:24.434 [2024-11-06 15:44:48.673006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:108696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.434 [2024-11-06 15:44:48.673035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:108704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.434 [2024-11-06 15:44:48.673064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:108464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f1000 len:0x1000 key:0x180d00 00:40:24.434 [2024-11-06 15:44:48.673093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:108496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043af000 len:0x1000 key:0x180d00 00:40:24.434 [2024-11-06 15:44:48.673133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:24.434 
[2024-11-06 15:44:48.673149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:108504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b3000 len:0x1000 key:0x180d00 00:40:24.434 [2024-11-06 15:44:48.673163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:108528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439d000 len:0x1000 key:0x180d00 00:40:24.434 [2024-11-06 15:44:48.673191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:108560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004349000 len:0x1000 key:0x180d00 00:40:24.434 [2024-11-06 15:44:48.673221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:108584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004379000 len:0x1000 key:0x180d00 00:40:24.434 [2024-11-06 15:44:48.673252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:108600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c5000 len:0x1000 key:0x180d00 00:40:24.434 [2024-11-06 15:44:48.673281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:108736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.434 [2024-11-06 15:44:48.673310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:107872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e7000 len:0x1000 key:0x180d00 00:40:24.434 [2024-11-06 15:44:48.673340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:108752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.434 [2024-11-06 15:44:48.673371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:107920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433d000 len:0x1000 key:0x180d00 00:40:24.434 [2024-11-06 15:44:48.673400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:49 nsid:1 lba:107968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c1000 len:0x1000 key:0x180d00 00:40:24.434 [2024-11-06 15:44:48.673429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:108016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433b000 len:0x1000 key:0x180d00 00:40:24.434 [2024-11-06 15:44:48.673459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:108776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.434 [2024-11-06 15:44:48.673488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:108784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.434 [2024-11-06 15:44:48.673516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435d000 len:0x1000 key:0x180d00 00:40:24.434 [2024-11-06 15:44:48.673545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:108808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.434 [2024-11-06 15:44:48.673574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:108144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b9000 len:0x1000 key:0x180d00 00:40:24.434 [2024-11-06 15:44:48.673604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:108824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.434 [2024-11-06 15:44:48.673632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:108192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004345000 len:0x1000 key:0x180d00 00:40:24.434 [2024-11-06 15:44:48.673662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:108232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004361000 len:0x1000 key:0x180d00 00:40:24.434 [2024-11-06 15:44:48.673693] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:108280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cb000 len:0x1000 key:0x180d00 00:40:24.434 [2024-11-06 15:44:48.673722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:107848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b7000 len:0x1000 key:0x180d00 00:40:24.434 [2024-11-06 15:44:48.673751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:108320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.434 [2024-11-06 15:44:48.673961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.673978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:108344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.434 [2024-11-06 15:44:48.673991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.674006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:108360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.434 [2024-11-06 15:44:48.674019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.674036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435b000 len:0x1000 key:0x180d00 00:40:24.434 [2024-11-06 15:44:48.674048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.674065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:107960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004363000 len:0x1000 key:0x180d00 00:40:24.434 [2024-11-06 15:44:48.674078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.674097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:108008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043eb000 len:0x1000 key:0x180d00 00:40:24.434 [2024-11-06 15:44:48.674110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.674133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.434 [2024-11-06 15:44:48.674147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:40:24.434 
[2024-11-06 15:44:48.674163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:108072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a5000 len:0x1000 key:0x180d00 00:40:24.434 [2024-11-06 15:44:48.674175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:40:24.434 [2024-11-06 15:44:48.674191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.434 [2024-11-06 15:44:48.674204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.674223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:108488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.435 [2024-11-06 15:44:48.674236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.674252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:108136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436b000 len:0x1000 key:0x180d00 00:40:24.435 [2024-11-06 15:44:48.674265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.674281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.435 [2024-11-06 15:44:48.674293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.674309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.435 [2024-11-06 15:44:48.674321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.674337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:108224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ad000 len:0x1000 key:0x180d00 00:40:24.435 [2024-11-06 15:44:48.674350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.674365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:108272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004335000 len:0x1000 key:0x180d00 00:40:24.435 [2024-11-06 15:44:48.674379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.674395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.435 [2024-11-06 15:44:48.674408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.674423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:107440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436d000 
len:0x1000 key:0x180d00 00:40:24.435 [2024-11-06 15:44:48.674437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.674453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.435 [2024-11-06 15:44:48.674466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.674482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fb000 len:0x1000 key:0x180d00 00:40:24.435 [2024-11-06 15:44:48.674495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.674511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.435 [2024-11-06 15:44:48.674523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.674541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:108000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.435 [2024-11-06 15:44:48.674554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.674577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.435 [2024-11-06 15:44:48.674590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.674605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:107568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bb000 len:0x1000 key:0x180d00 00:40:24.435 [2024-11-06 15:44:48.674618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.674633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:107648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043df000 len:0x1000 key:0x180d00 00:40:24.435 [2024-11-06 15:44:48.674646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.674661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.435 [2024-11-06 15:44:48.674674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.674690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:107792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a3000 len:0x1000 key:0x180d00 00:40:24.435 [2024-11-06 15:44:48.674702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 
sqhd:0070 p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.674717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.435 [2024-11-06 15:44:48.674730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.674746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004329000 len:0x1000 key:0x180d00 00:40:24.435 [2024-11-06 15:44:48.674759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.674774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.435 [2024-11-06 15:44:48.674787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.674802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.435 [2024-11-06 15:44:48.674815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.674830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.435 [2024-11-06 15:44:48.674872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.674888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.435 [2024-11-06 15:44:48.674900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.676699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.435 [2024-11-06 15:44:48.676728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.676968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.435 [2024-11-06 15:44:48.676984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.677002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:108872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.435 [2024-11-06 15:44:48.677015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.677032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.435 [2024-11-06 15:44:48.677044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.677059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.435 [2024-11-06 15:44:48.677072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.677089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:108896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.435 [2024-11-06 15:44:48.677102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.677120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:108728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a7000 len:0x1000 key:0x180d00 00:40:24.435 [2024-11-06 15:44:48.677144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.677161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.435 [2024-11-06 15:44:48.677175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:24.435 [2024-11-06 15:44:48.677191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.435 [2024-11-06 15:44:48.677204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:24.436 [2024-11-06 15:44:48.677221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:108744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ef000 len:0x1000 key:0x180d00 00:40:24.436 [2024-11-06 15:44:48.677234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:24.436 [2024-11-06 15:44:48.677250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.436 [2024-11-06 15:44:48.677262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:24.436 [2024-11-06 15:44:48.677278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004337000 len:0x1000 key:0x180d00 00:40:24.436 [2024-11-06 15:44:48.677291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:40:24.436 [2024-11-06 15:44:48.677308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:108792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004381000 len:0x1000 key:0x180d00 00:40:24.436 [2024-11-06 15:44:48.677324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:40:24.436 [2024-11-06 15:44:48.677340] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:108816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004343000 len:0x1000 key:0x180d00 00:40:24.436 [2024-11-06 15:44:48.677353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:40:24.436 [2024-11-06 15:44:48.677369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:108976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.436 [2024-11-06 15:44:48.677381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:40:24.436 [2024-11-06 15:44:48.677398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.436 [2024-11-06 15:44:48.677411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:40:24.436 [2024-11-06 15:44:48.677427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:109000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.436 [2024-11-06 15:44:48.677440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:40:24.436 [2024-11-06 15:44:48.677456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:109008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.436 [2024-11-06 15:44:48.677468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:40:24.436 [2024-11-06 15:44:48.677484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:109016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.436 [2024-11-06 15:44:48.677497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:40:24.436 [2024-11-06 15:44:48.677513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:109024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.436 [2024-11-06 15:44:48.677526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:40:24.436 [2024-11-06 15:44:48.677542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:108448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432b000 len:0x1000 key:0x180d00 00:40:24.436 [2024-11-06 15:44:48.677555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:40:24.436 [2024-11-06 15:44:48.677571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:109032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.436 [2024-11-06 15:44:48.677584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:40:24.436 [2024-11-06 15:44:48.677601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:108536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004385000 len:0x1000 key:0x180d00 00:40:24.436 [2024-11-06 15:44:48.677614] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:40:24.436 [2024-11-06 15:44:48.677631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:109040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.436 [2024-11-06 15:44:48.677644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:40:24.436 [2024-11-06 15:44:48.677660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:108632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cd000 len:0x1000 key:0x180d00 00:40:24.436 [2024-11-06 15:44:48.677675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:40:24.436 [2024-11-06 15:44:48.677692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:109048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.436 [2024-11-06 15:44:48.677704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:40:24.436 [2024-11-06 15:44:48.677721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:107984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a9000 len:0x1000 key:0x180d00 00:40:24.436 [2024-11-06 15:44:48.677733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:40:24.436 [2024-11-06 15:44:48.677750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:109064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.436 [2024-11-06 15:44:48.677763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:40:24.436 [2024-11-06 15:44:48.677778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:109080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.436 [2024-11-06 15:44:48.677791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:40:24.436 [2024-11-06 15:44:48.677808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:109088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.436 [2024-11-06 15:44:48.677820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:40:24.436 [2024-11-06 15:44:48.677837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:108200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004351000 len:0x1000 key:0x180d00 00:40:24.436 [2024-11-06 15:44:48.677851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:40:24.436 [2024-11-06 15:44:48.677866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:24.436 [2024-11-06 15:44:48.677879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:40:24.436 [2024-11-06 15:44:48.677895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 
nsid:1 lba:108640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004377000 len:0x1000 key:0x180d00 00:40:24.436 [2024-11-06 15:44:48.677908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:40:24.436 13538.67 IOPS, 52.89 MiB/s [2024-11-06T14:44:52.071Z] 13619.57 IOPS, 53.20 MiB/s [2024-11-06T14:44:52.071Z] Received shutdown signal, test time was about 28.898932 seconds 00:40:24.436 00:40:24.436 Latency(us) 00:40:24.436 [2024-11-06T14:44:52.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:24.436 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:40:24.436 Verification LBA range: start 0x0 length 0x4000 00:40:24.436 Nvme0n1 : 28.90 13673.55 53.41 0.00 0.00 9338.80 79.25 3019898.88 00:40:24.436 [2024-11-06T14:44:52.071Z] =================================================================================================================== 00:40:24.436 [2024-11-06T14:44:52.071Z] Total : 13673.55 53.41 0.00 0.00 9338.80 79.25 3019898.88 00:40:24.436 15:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:24.696 15:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:40:24.696 15:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:40:24.696 15:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:40:24.696 15:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:24.696 15:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:40:24.696 15:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:40:24.696 15:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:40:24.696 15:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:40:24.696 15:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:24.696 15:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:40:24.696 rmmod nvme_rdma 00:40:24.696 rmmod nvme_fabrics 00:40:24.696 15:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:24.696 15:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:40:24.696 15:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:40:24.696 15:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3300713 ']' 00:40:24.696 15:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3300713 00:40:24.696 15:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 3300713 ']' 00:40:24.696 15:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 3300713 00:40:24.696 15:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:40:24.696 15:44:52 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:24.696 15:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3300713 00:40:24.696 15:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:40:24.696 15:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:40:24.696 15:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3300713' 00:40:24.696 killing process with pid 3300713 00:40:24.696 15:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 3300713 00:40:24.696 15:44:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 3300713 00:40:26.604 15:44:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:26.604 15:44:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:40:26.604 00:40:26.604 real 0m43.173s 00:40:26.604 user 2m1.049s 00:40:26.604 sys 0m9.961s 00:40:26.604 15:44:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:26.605 15:44:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:40:26.605 ************************************ 00:40:26.605 END TEST nvmf_host_multipath_status 00:40:26.605 ************************************ 00:40:26.605 15:44:53 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:40:26.605 15:44:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:40:26.605 15:44:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:26.605 15:44:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:40:26.605 ************************************ 00:40:26.605 START TEST nvmf_discovery_remove_ifc 00:40:26.605 ************************************ 00:40:26.605 15:44:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:40:26.605 * Looking for test storage... 
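The traced lines just above — the argument check in autotest_common.sh, the timed real/user/sys summary printed for the previous test, and the "START TEST nvmf_discovery_remove_ifc" banner — come from the suite's run_test wrapper. The snippet below is only a rough bash sketch of that wrapper pattern as suggested by the trace; the function body, banner width, and variable names here are assumptions, not SPDK's actual implementation in autotest_common.sh.

run_test() {
    # Hypothetical approximation: run one timed sub-test and frame it with
    # banners, as the log shows ("START TEST ..." / "END TEST ..." around
    # each test script invocation).
    local name="$1"; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # e.g. discovery_remove_ifc.sh --transport=rdma
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

# Usage matching the traced call above (paths shortened for illustration):
#   run_test nvmf_discovery_remove_ifc ./test/nvmf/host/discovery_remove_ifc.sh --transport=rdma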
00:40:26.605 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:26.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:26.605 --rc genhtml_branch_coverage=1 00:40:26.605 --rc genhtml_function_coverage=1 00:40:26.605 --rc genhtml_legend=1 00:40:26.605 --rc geninfo_all_blocks=1 00:40:26.605 --rc geninfo_unexecuted_blocks=1 00:40:26.605 00:40:26.605 ' 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:26.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:26.605 --rc genhtml_branch_coverage=1 00:40:26.605 --rc genhtml_function_coverage=1 00:40:26.605 --rc genhtml_legend=1 00:40:26.605 --rc geninfo_all_blocks=1 00:40:26.605 --rc geninfo_unexecuted_blocks=1 00:40:26.605 00:40:26.605 ' 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:26.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:26.605 --rc genhtml_branch_coverage=1 00:40:26.605 --rc genhtml_function_coverage=1 00:40:26.605 --rc genhtml_legend=1 00:40:26.605 --rc geninfo_all_blocks=1 00:40:26.605 --rc geninfo_unexecuted_blocks=1 00:40:26.605 00:40:26.605 ' 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:26.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:26.605 --rc genhtml_branch_coverage=1 00:40:26.605 --rc genhtml_function_coverage=1 00:40:26.605 --rc genhtml_legend=1 00:40:26.605 --rc geninfo_all_blocks=1 00:40:26.605 --rc geninfo_unexecuted_blocks=1 00:40:26.605 00:40:26.605 ' 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 
00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:40:26.605 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.606 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:40:26.606 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:26.606 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:26.606 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:26.606 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:26.606 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:26.606 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:26.606 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:26.606 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:26.606 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:26.606 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:26.606 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:40:26.606 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:40:26.606 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:40:26.606 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:40:26.606 00:40:26.606 real 0m0.238s 00:40:26.606 user 0m0.132s 00:40:26.606 sys 0m0.125s 00:40:26.606 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:26.606 15:44:54 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:26.606 ************************************ 00:40:26.606 END TEST nvmf_discovery_remove_ifc 00:40:26.606 ************************************ 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:40:26.865 ************************************ 00:40:26.865 START TEST nvmf_identify_kernel_target 00:40:26.865 ************************************ 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:40:26.865 * Looking for test storage... 00:40:26.865 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:40:26.865 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:27.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.125 --rc genhtml_branch_coverage=1 00:40:27.125 --rc genhtml_function_coverage=1 00:40:27.125 --rc genhtml_legend=1 00:40:27.125 --rc geninfo_all_blocks=1 00:40:27.125 --rc geninfo_unexecuted_blocks=1 00:40:27.125 00:40:27.125 ' 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:27.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.125 --rc genhtml_branch_coverage=1 00:40:27.125 --rc genhtml_function_coverage=1 00:40:27.125 --rc genhtml_legend=1 00:40:27.125 --rc geninfo_all_blocks=1 00:40:27.125 --rc geninfo_unexecuted_blocks=1 00:40:27.125 00:40:27.125 ' 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:27.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.125 --rc genhtml_branch_coverage=1 00:40:27.125 --rc genhtml_function_coverage=1 00:40:27.125 --rc genhtml_legend=1 00:40:27.125 --rc geninfo_all_blocks=1 00:40:27.125 --rc geninfo_unexecuted_blocks=1 00:40:27.125 00:40:27.125 ' 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
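The cmp_versions trace above is autotest_common.sh checking whether the installed lcov (1.15 here) is older than 2 before picking the coverage option spelling. A minimal standalone sketch of that field-by-field numeric comparison follows; it is a simplified illustration, not the scripts/common.sh implementation, and version_lt is a name introduced only for this sketch.

# Simplified sketch of the version comparison traced above: split on dots,
# compare field by field numerically, missing fields count as 0.
version_lt() {
    local IFS=.-
    local -a a b
    local i
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1
}
version_lt 1.15 2 && echo "lcov 1.15 is older than 2"
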
common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:27.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.125 --rc genhtml_branch_coverage=1 00:40:27.125 --rc genhtml_function_coverage=1 00:40:27.125 --rc genhtml_legend=1 00:40:27.125 --rc geninfo_all_blocks=1 00:40:27.125 --rc geninfo_unexecuted_blocks=1 00:40:27.125 00:40:27.125 ' 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:27.125 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:27.126 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:40:27.126 15:44:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # 
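The "[: : integer expression expected" line earlier in this trace (printed each time nvmf/common.sh line 33 is sourced) comes from test(1) being handed an empty string where -eq needs an integer. A minimal reproduction with a hypothetical variable name; this is an illustration only, not a patch to nvmf/common.sh.

# Reproduces the message seen in the trace and shows a guarded form.
flag=''                              # hypothetical stand-in for the empty value
[ "$flag" -eq 1 ]                    # prints: [: : integer expression expected
[ "${flag:-0}" -eq 1 ] && echo on    # defaulting to 0 avoids the message
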
local -ga x722 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:40:33.703 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:40:33.703 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:33.703 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:40:33.704 Found net devices under 0000:18:00.0: mlx_0_0 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:40:33.704 Found net devices under 0000:18:00.1: mlx_0_1 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:33.704 15:45:01 
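gather_supported_nvmf_pci_devs above walks the known Intel/Mellanox device IDs and records the kernel net interface that sits under each matching PCI function. A minimal sketch of the same sysfs walk, restricted to the Mellanox vendor ID seen on this rig (0x15b3); it is an illustration of the lookup, not the common.sh code, and it omits the per-device-ID filtering.

# List Mellanox PCI functions and the netdev under each one, mirroring the
# "Found 0000:18:00.x (0x15b3 - 0x1015)" / "Found net devices under ..." lines.
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == 0x15b3 ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] || continue
        echo "Found ${pci##*/} ($(cat "$pci/vendor") - $(cat "$pci/device")): ${net##*/}"
    done
done
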
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # rdma_device_init 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:40:33.704 
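rdma_device_init / load_ib_rdma_modules above is a straight batch of modprobes for the kernel RDMA stack; collected here as a standalone snippet using exactly the modules named in the trace.

# The module set loaded by load_ib_rdma_modules in the trace above.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done
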
15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:40:33.704 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:40:33.704 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:40:33.704 altname enp24s0f0np0 00:40:33.704 altname ens785f0np0 00:40:33.704 inet 192.168.100.8/24 scope global mlx_0_0 00:40:33.704 valid_lft forever preferred_lft forever 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:40:33.704 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:40:33.704 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:40:33.704 altname enp24s0f1np1 00:40:33.704 altname ens785f1np1 00:40:33.704 inet 192.168.100.9/24 scope global mlx_0_1 00:40:33.704 valid_lft forever preferred_lft forever 00:40:33.704 15:45:01 
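get_ip_address above reduces the one-line `ip -o -4 addr show` output for each RDMA interface to a bare IPv4 address. The same pipeline as a standalone helper, with the addresses this rig reports shown in the comments.

# Take the fourth field of the one-line ip(8) output and strip the prefix length,
# as nvmf/common.sh get_ip_address does in the trace above.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # 192.168.100.8 on this rig
get_ip_address mlx_0_1   # 192.168.100.9 on this rig
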
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:40:33.704 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:40:33.965 
15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:40:33.965 192.168.100.9' 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:40:33.965 192.168.100.9' 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # head -n 1 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:40:33.965 192.168.100.9' 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # tail -n +2 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # head -n 1 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:40:33.965 15:45:01 
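The trace above turns the collected RDMA address list into the first and second target IPs with head/tail. The same selection as a standalone snippet, using the two addresses this rig reports.

# Mirrors the head / tail -n +2 selection in the trace above.
RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9
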
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:40:33.965 15:45:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:40:37.257 Waiting for block devices as requested 00:40:37.257 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:40:37.257 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:40:37.517 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:40:37.517 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:40:37.517 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:40:37.776 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:40:37.776 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:40:37.776 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:40:38.036 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:40:38.036 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:40:38.036 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:40:38.295 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:40:38.295 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:40:38.295 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:40:38.556 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:40:38.556 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:40:38.556 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:40:38.815 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:40:38.815 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:40:38.815 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:40:38.815 15:45:06 
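configure_kernel_target, started above and continued in the trace below, builds the kernel nvmet target over configfs. Because xtrace does not print redirections, the echo lines in the trace appear without their destination files; the sketch below fills them in from the standard nvmet configfs layout, so treat the exact attribute paths as assumptions rather than a quote of nvmf/common.sh.

# Kernel NVMe-oF/RDMA target over configfs, mirroring configure_kernel_target.
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
modprobe nvmet                        # nvmet-rdma is assumed to autoload for the rdma port
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # reported later as Model Number
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 192.168.100.8 > "$port/addr_traddr"
echo rdma > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"   # exposes the subsystem on the port
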
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:40:38.815 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:40:38.815 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:40:38.816 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:40:38.816 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:40:38.816 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:40:38.816 No valid GPT data, bailing 00:40:38.816 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:40:38.816 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:40:38.816 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:40:38.816 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:40:38.816 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:40:38.816 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:38.816 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:38.816 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:40:38.816 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:40:38.816 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:40:38.816 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:40:38.816 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:40:38.816 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:40:38.816 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo rdma 00:40:38.816 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:40:38.816 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:40:38.816 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:40:38.816 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:40:39.076 00:40:39.076 Discovery Log Number of Records 2, Generation counter 2 00:40:39.076 =====Discovery Log Entry 0====== 00:40:39.076 trtype: rdma 00:40:39.076 adrfam: ipv4 00:40:39.076 subtype: current discovery subsystem 00:40:39.076 treq: not specified, sq 
flow control disable supported 00:40:39.076 portid: 1 00:40:39.076 trsvcid: 4420 00:40:39.076 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:40:39.076 traddr: 192.168.100.8 00:40:39.076 eflags: none 00:40:39.076 rdma_prtype: not specified 00:40:39.076 rdma_qptype: connected 00:40:39.076 rdma_cms: rdma-cm 00:40:39.076 rdma_pkey: 0x0000 00:40:39.076 =====Discovery Log Entry 1====== 00:40:39.076 trtype: rdma 00:40:39.076 adrfam: ipv4 00:40:39.076 subtype: nvme subsystem 00:40:39.076 treq: not specified, sq flow control disable supported 00:40:39.076 portid: 1 00:40:39.076 trsvcid: 4420 00:40:39.076 subnqn: nqn.2016-06.io.spdk:testnqn 00:40:39.076 traddr: 192.168.100.8 00:40:39.076 eflags: none 00:40:39.076 rdma_prtype: not specified 00:40:39.076 rdma_qptype: connected 00:40:39.076 rdma_cms: rdma-cm 00:40:39.076 rdma_pkey: 0x0000 00:40:39.076 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:40:39.076 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:40:39.076 ===================================================== 00:40:39.076 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:40:39.076 ===================================================== 00:40:39.076 Controller Capabilities/Features 00:40:39.076 ================================ 00:40:39.076 Vendor ID: 0000 00:40:39.076 Subsystem Vendor ID: 0000 00:40:39.076 Serial Number: d45bf27c440da7902364 00:40:39.076 Model Number: Linux 00:40:39.076 Firmware Version: 6.8.9-20 00:40:39.076 Recommended Arb Burst: 0 00:40:39.076 IEEE OUI Identifier: 00 00 00 00:40:39.076 Multi-path I/O 00:40:39.076 May have multiple subsystem ports: No 00:40:39.076 May have multiple controllers: No 00:40:39.076 Associated with SR-IOV VF: No 00:40:39.076 Max Data Transfer Size: Unlimited 00:40:39.076 Max Number of Namespaces: 0 00:40:39.076 Max Number of I/O Queues: 1024 00:40:39.076 NVMe Specification Version (VS): 1.3 00:40:39.076 NVMe Specification Version (Identify): 1.3 00:40:39.076 Maximum Queue Entries: 128 00:40:39.076 Contiguous Queues Required: No 00:40:39.076 Arbitration Mechanisms Supported 00:40:39.076 Weighted Round Robin: Not Supported 00:40:39.076 Vendor Specific: Not Supported 00:40:39.076 Reset Timeout: 7500 ms 00:40:39.076 Doorbell Stride: 4 bytes 00:40:39.076 NVM Subsystem Reset: Not Supported 00:40:39.076 Command Sets Supported 00:40:39.076 NVM Command Set: Supported 00:40:39.076 Boot Partition: Not Supported 00:40:39.076 Memory Page Size Minimum: 4096 bytes 00:40:39.076 Memory Page Size Maximum: 4096 bytes 00:40:39.076 Persistent Memory Region: Not Supported 00:40:39.076 Optional Asynchronous Events Supported 00:40:39.076 Namespace Attribute Notices: Not Supported 00:40:39.076 Firmware Activation Notices: Not Supported 00:40:39.076 ANA Change Notices: Not Supported 00:40:39.076 PLE Aggregate Log Change Notices: Not Supported 00:40:39.076 LBA Status Info Alert Notices: Not Supported 00:40:39.076 EGE Aggregate Log Change Notices: Not Supported 00:40:39.076 Normal NVM Subsystem Shutdown event: Not Supported 00:40:39.076 Zone Descriptor Change Notices: Not Supported 00:40:39.076 Discovery Log Change Notices: Supported 00:40:39.076 Controller Attributes 00:40:39.076 128-bit Host Identifier: Not Supported 00:40:39.076 Non-Operational Permissive Mode: Not Supported 00:40:39.076 NVM Sets: Not Supported 00:40:39.076 Read Recovery Levels: 
Not Supported 00:40:39.076 Endurance Groups: Not Supported 00:40:39.076 Predictable Latency Mode: Not Supported 00:40:39.076 Traffic Based Keep ALive: Not Supported 00:40:39.076 Namespace Granularity: Not Supported 00:40:39.076 SQ Associations: Not Supported 00:40:39.076 UUID List: Not Supported 00:40:39.076 Multi-Domain Subsystem: Not Supported 00:40:39.076 Fixed Capacity Management: Not Supported 00:40:39.076 Variable Capacity Management: Not Supported 00:40:39.076 Delete Endurance Group: Not Supported 00:40:39.076 Delete NVM Set: Not Supported 00:40:39.076 Extended LBA Formats Supported: Not Supported 00:40:39.076 Flexible Data Placement Supported: Not Supported 00:40:39.076 00:40:39.076 Controller Memory Buffer Support 00:40:39.076 ================================ 00:40:39.076 Supported: No 00:40:39.076 00:40:39.076 Persistent Memory Region Support 00:40:39.076 ================================ 00:40:39.076 Supported: No 00:40:39.076 00:40:39.076 Admin Command Set Attributes 00:40:39.076 ============================ 00:40:39.076 Security Send/Receive: Not Supported 00:40:39.076 Format NVM: Not Supported 00:40:39.076 Firmware Activate/Download: Not Supported 00:40:39.076 Namespace Management: Not Supported 00:40:39.076 Device Self-Test: Not Supported 00:40:39.076 Directives: Not Supported 00:40:39.076 NVMe-MI: Not Supported 00:40:39.076 Virtualization Management: Not Supported 00:40:39.076 Doorbell Buffer Config: Not Supported 00:40:39.076 Get LBA Status Capability: Not Supported 00:40:39.076 Command & Feature Lockdown Capability: Not Supported 00:40:39.076 Abort Command Limit: 1 00:40:39.076 Async Event Request Limit: 1 00:40:39.076 Number of Firmware Slots: N/A 00:40:39.076 Firmware Slot 1 Read-Only: N/A 00:40:39.076 Firmware Activation Without Reset: N/A 00:40:39.076 Multiple Update Detection Support: N/A 00:40:39.076 Firmware Update Granularity: No Information Provided 00:40:39.076 Per-Namespace SMART Log: No 00:40:39.076 Asymmetric Namespace Access Log Page: Not Supported 00:40:39.076 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:40:39.076 Command Effects Log Page: Not Supported 00:40:39.076 Get Log Page Extended Data: Supported 00:40:39.076 Telemetry Log Pages: Not Supported 00:40:39.076 Persistent Event Log Pages: Not Supported 00:40:39.076 Supported Log Pages Log Page: May Support 00:40:39.076 Commands Supported & Effects Log Page: Not Supported 00:40:39.076 Feature Identifiers & Effects Log Page:May Support 00:40:39.076 NVMe-MI Commands & Effects Log Page: May Support 00:40:39.076 Data Area 4 for Telemetry Log: Not Supported 00:40:39.076 Error Log Page Entries Supported: 1 00:40:39.076 Keep Alive: Not Supported 00:40:39.076 00:40:39.076 NVM Command Set Attributes 00:40:39.076 ========================== 00:40:39.076 Submission Queue Entry Size 00:40:39.076 Max: 1 00:40:39.076 Min: 1 00:40:39.076 Completion Queue Entry Size 00:40:39.076 Max: 1 00:40:39.076 Min: 1 00:40:39.076 Number of Namespaces: 0 00:40:39.076 Compare Command: Not Supported 00:40:39.076 Write Uncorrectable Command: Not Supported 00:40:39.076 Dataset Management Command: Not Supported 00:40:39.076 Write Zeroes Command: Not Supported 00:40:39.076 Set Features Save Field: Not Supported 00:40:39.076 Reservations: Not Supported 00:40:39.076 Timestamp: Not Supported 00:40:39.076 Copy: Not Supported 00:40:39.076 Volatile Write Cache: Not Present 00:40:39.076 Atomic Write Unit (Normal): 1 00:40:39.076 Atomic Write Unit (PFail): 1 00:40:39.076 Atomic Compare & Write Unit: 1 00:40:39.076 Fused Compare & Write: Not 
Supported 00:40:39.076 Scatter-Gather List 00:40:39.076 SGL Command Set: Supported 00:40:39.076 SGL Keyed: Supported 00:40:39.076 SGL Bit Bucket Descriptor: Not Supported 00:40:39.076 SGL Metadata Pointer: Not Supported 00:40:39.076 Oversized SGL: Not Supported 00:40:39.076 SGL Metadata Address: Not Supported 00:40:39.076 SGL Offset: Supported 00:40:39.076 Transport SGL Data Block: Not Supported 00:40:39.076 Replay Protected Memory Block: Not Supported 00:40:39.076 00:40:39.076 Firmware Slot Information 00:40:39.076 ========================= 00:40:39.077 Active slot: 0 00:40:39.077 00:40:39.077 00:40:39.077 Error Log 00:40:39.077 ========= 00:40:39.077 00:40:39.077 Active Namespaces 00:40:39.077 ================= 00:40:39.077 Discovery Log Page 00:40:39.077 ================== 00:40:39.077 Generation Counter: 2 00:40:39.077 Number of Records: 2 00:40:39.077 Record Format: 0 00:40:39.077 00:40:39.077 Discovery Log Entry 0 00:40:39.077 ---------------------- 00:40:39.077 Transport Type: 1 (RDMA) 00:40:39.077 Address Family: 1 (IPv4) 00:40:39.077 Subsystem Type: 3 (Current Discovery Subsystem) 00:40:39.077 Entry Flags: 00:40:39.077 Duplicate Returned Information: 0 00:40:39.077 Explicit Persistent Connection Support for Discovery: 0 00:40:39.077 Transport Requirements: 00:40:39.077 Secure Channel: Not Specified 00:40:39.077 Port ID: 1 (0x0001) 00:40:39.077 Controller ID: 65535 (0xffff) 00:40:39.077 Admin Max SQ Size: 32 00:40:39.077 Transport Service Identifier: 4420 00:40:39.077 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:40:39.077 Transport Address: 192.168.100.8 00:40:39.077 Transport Specific Address Subtype - RDMA 00:40:39.077 RDMA QP Service Type: 1 (Reliable Connected) 00:40:39.077 RDMA Provider Type: 1 (No provider specified) 00:40:39.077 RDMA CM Service: 1 (RDMA_CM) 00:40:39.077 Discovery Log Entry 1 00:40:39.077 ---------------------- 00:40:39.077 Transport Type: 1 (RDMA) 00:40:39.077 Address Family: 1 (IPv4) 00:40:39.077 Subsystem Type: 2 (NVM Subsystem) 00:40:39.077 Entry Flags: 00:40:39.077 Duplicate Returned Information: 0 00:40:39.077 Explicit Persistent Connection Support for Discovery: 0 00:40:39.077 Transport Requirements: 00:40:39.077 Secure Channel: Not Specified 00:40:39.077 Port ID: 1 (0x0001) 00:40:39.077 Controller ID: 65535 (0xffff) 00:40:39.077 Admin Max SQ Size: 32 00:40:39.077 Transport Service Identifier: 4420 00:40:39.077 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:40:39.077 Transport Address: 192.168.100.8 00:40:39.077 Transport Specific Address Subtype - RDMA 00:40:39.077 RDMA QP Service Type: 1 (Reliable Connected) 00:40:39.337 RDMA Provider Type: 1 (No provider specified) 00:40:39.337 RDMA CM Service: 1 (RDMA_CM) 00:40:39.337 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:39.337 get_feature(0x01) failed 00:40:39.337 get_feature(0x02) failed 00:40:39.337 get_feature(0x04) failed 00:40:39.337 ===================================================== 00:40:39.337 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:40:39.337 ===================================================== 00:40:39.337 Controller Capabilities/Features 00:40:39.337 ================================ 00:40:39.337 Vendor ID: 0000 00:40:39.337 Subsystem Vendor ID: 0000 00:40:39.337 Serial Number: 
682f3f9d73e1d694ede4 00:40:39.337 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:40:39.337 Firmware Version: 6.8.9-20 00:40:39.337 Recommended Arb Burst: 6 00:40:39.337 IEEE OUI Identifier: 00 00 00 00:40:39.337 Multi-path I/O 00:40:39.337 May have multiple subsystem ports: Yes 00:40:39.337 May have multiple controllers: Yes 00:40:39.337 Associated with SR-IOV VF: No 00:40:39.337 Max Data Transfer Size: 1048576 00:40:39.337 Max Number of Namespaces: 1024 00:40:39.337 Max Number of I/O Queues: 128 00:40:39.337 NVMe Specification Version (VS): 1.3 00:40:39.337 NVMe Specification Version (Identify): 1.3 00:40:39.337 Maximum Queue Entries: 128 00:40:39.337 Contiguous Queues Required: No 00:40:39.337 Arbitration Mechanisms Supported 00:40:39.337 Weighted Round Robin: Not Supported 00:40:39.337 Vendor Specific: Not Supported 00:40:39.337 Reset Timeout: 7500 ms 00:40:39.337 Doorbell Stride: 4 bytes 00:40:39.337 NVM Subsystem Reset: Not Supported 00:40:39.337 Command Sets Supported 00:40:39.337 NVM Command Set: Supported 00:40:39.337 Boot Partition: Not Supported 00:40:39.337 Memory Page Size Minimum: 4096 bytes 00:40:39.337 Memory Page Size Maximum: 4096 bytes 00:40:39.337 Persistent Memory Region: Not Supported 00:40:39.337 Optional Asynchronous Events Supported 00:40:39.337 Namespace Attribute Notices: Supported 00:40:39.337 Firmware Activation Notices: Not Supported 00:40:39.337 ANA Change Notices: Supported 00:40:39.337 PLE Aggregate Log Change Notices: Not Supported 00:40:39.337 LBA Status Info Alert Notices: Not Supported 00:40:39.337 EGE Aggregate Log Change Notices: Not Supported 00:40:39.337 Normal NVM Subsystem Shutdown event: Not Supported 00:40:39.337 Zone Descriptor Change Notices: Not Supported 00:40:39.337 Discovery Log Change Notices: Not Supported 00:40:39.337 Controller Attributes 00:40:39.337 128-bit Host Identifier: Supported 00:40:39.337 Non-Operational Permissive Mode: Not Supported 00:40:39.337 NVM Sets: Not Supported 00:40:39.337 Read Recovery Levels: Not Supported 00:40:39.337 Endurance Groups: Not Supported 00:40:39.337 Predictable Latency Mode: Not Supported 00:40:39.337 Traffic Based Keep ALive: Supported 00:40:39.337 Namespace Granularity: Not Supported 00:40:39.337 SQ Associations: Not Supported 00:40:39.337 UUID List: Not Supported 00:40:39.337 Multi-Domain Subsystem: Not Supported 00:40:39.337 Fixed Capacity Management: Not Supported 00:40:39.337 Variable Capacity Management: Not Supported 00:40:39.337 Delete Endurance Group: Not Supported 00:40:39.337 Delete NVM Set: Not Supported 00:40:39.337 Extended LBA Formats Supported: Not Supported 00:40:39.337 Flexible Data Placement Supported: Not Supported 00:40:39.337 00:40:39.337 Controller Memory Buffer Support 00:40:39.337 ================================ 00:40:39.337 Supported: No 00:40:39.337 00:40:39.337 Persistent Memory Region Support 00:40:39.337 ================================ 00:40:39.337 Supported: No 00:40:39.337 00:40:39.337 Admin Command Set Attributes 00:40:39.338 ============================ 00:40:39.338 Security Send/Receive: Not Supported 00:40:39.338 Format NVM: Not Supported 00:40:39.338 Firmware Activate/Download: Not Supported 00:40:39.338 Namespace Management: Not Supported 00:40:39.338 Device Self-Test: Not Supported 00:40:39.338 Directives: Not Supported 00:40:39.338 NVMe-MI: Not Supported 00:40:39.338 Virtualization Management: Not Supported 00:40:39.338 Doorbell Buffer Config: Not Supported 00:40:39.338 Get LBA Status Capability: Not Supported 00:40:39.338 Command & Feature Lockdown 
Capability: Not Supported 00:40:39.338 Abort Command Limit: 4 00:40:39.338 Async Event Request Limit: 4 00:40:39.338 Number of Firmware Slots: N/A 00:40:39.338 Firmware Slot 1 Read-Only: N/A 00:40:39.338 Firmware Activation Without Reset: N/A 00:40:39.338 Multiple Update Detection Support: N/A 00:40:39.338 Firmware Update Granularity: No Information Provided 00:40:39.338 Per-Namespace SMART Log: Yes 00:40:39.338 Asymmetric Namespace Access Log Page: Supported 00:40:39.338 ANA Transition Time : 10 sec 00:40:39.338 00:40:39.338 Asymmetric Namespace Access Capabilities 00:40:39.338 ANA Optimized State : Supported 00:40:39.338 ANA Non-Optimized State : Supported 00:40:39.338 ANA Inaccessible State : Supported 00:40:39.338 ANA Persistent Loss State : Supported 00:40:39.338 ANA Change State : Supported 00:40:39.338 ANAGRPID is not changed : No 00:40:39.338 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:40:39.338 00:40:39.338 ANA Group Identifier Maximum : 128 00:40:39.338 Number of ANA Group Identifiers : 128 00:40:39.338 Max Number of Allowed Namespaces : 1024 00:40:39.338 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:40:39.338 Command Effects Log Page: Supported 00:40:39.338 Get Log Page Extended Data: Supported 00:40:39.338 Telemetry Log Pages: Not Supported 00:40:39.338 Persistent Event Log Pages: Not Supported 00:40:39.338 Supported Log Pages Log Page: May Support 00:40:39.338 Commands Supported & Effects Log Page: Not Supported 00:40:39.338 Feature Identifiers & Effects Log Page:May Support 00:40:39.338 NVMe-MI Commands & Effects Log Page: May Support 00:40:39.338 Data Area 4 for Telemetry Log: Not Supported 00:40:39.338 Error Log Page Entries Supported: 128 00:40:39.338 Keep Alive: Supported 00:40:39.338 Keep Alive Granularity: 1000 ms 00:40:39.338 00:40:39.338 NVM Command Set Attributes 00:40:39.338 ========================== 00:40:39.338 Submission Queue Entry Size 00:40:39.338 Max: 64 00:40:39.338 Min: 64 00:40:39.338 Completion Queue Entry Size 00:40:39.338 Max: 16 00:40:39.338 Min: 16 00:40:39.338 Number of Namespaces: 1024 00:40:39.338 Compare Command: Not Supported 00:40:39.338 Write Uncorrectable Command: Not Supported 00:40:39.338 Dataset Management Command: Supported 00:40:39.338 Write Zeroes Command: Supported 00:40:39.338 Set Features Save Field: Not Supported 00:40:39.338 Reservations: Not Supported 00:40:39.338 Timestamp: Not Supported 00:40:39.338 Copy: Not Supported 00:40:39.338 Volatile Write Cache: Present 00:40:39.338 Atomic Write Unit (Normal): 1 00:40:39.338 Atomic Write Unit (PFail): 1 00:40:39.338 Atomic Compare & Write Unit: 1 00:40:39.338 Fused Compare & Write: Not Supported 00:40:39.338 Scatter-Gather List 00:40:39.338 SGL Command Set: Supported 00:40:39.338 SGL Keyed: Supported 00:40:39.338 SGL Bit Bucket Descriptor: Not Supported 00:40:39.338 SGL Metadata Pointer: Not Supported 00:40:39.338 Oversized SGL: Not Supported 00:40:39.338 SGL Metadata Address: Not Supported 00:40:39.338 SGL Offset: Supported 00:40:39.338 Transport SGL Data Block: Not Supported 00:40:39.338 Replay Protected Memory Block: Not Supported 00:40:39.338 00:40:39.338 Firmware Slot Information 00:40:39.338 ========================= 00:40:39.338 Active slot: 0 00:40:39.338 00:40:39.338 Asymmetric Namespace Access 00:40:39.338 =========================== 00:40:39.338 Change Count : 0 00:40:39.338 Number of ANA Group Descriptors : 1 00:40:39.338 ANA Group Descriptor : 0 00:40:39.338 ANA Group ID : 1 00:40:39.338 Number of NSID Values : 1 00:40:39.338 Change Count : 0 00:40:39.338 ANA State 
: 1 00:40:39.338 Namespace Identifier : 1 00:40:39.338 00:40:39.338 Commands Supported and Effects 00:40:39.338 ============================== 00:40:39.338 Admin Commands 00:40:39.338 -------------- 00:40:39.338 Get Log Page (02h): Supported 00:40:39.338 Identify (06h): Supported 00:40:39.338 Abort (08h): Supported 00:40:39.338 Set Features (09h): Supported 00:40:39.338 Get Features (0Ah): Supported 00:40:39.338 Asynchronous Event Request (0Ch): Supported 00:40:39.338 Keep Alive (18h): Supported 00:40:39.338 I/O Commands 00:40:39.338 ------------ 00:40:39.338 Flush (00h): Supported 00:40:39.338 Write (01h): Supported LBA-Change 00:40:39.338 Read (02h): Supported 00:40:39.338 Write Zeroes (08h): Supported LBA-Change 00:40:39.338 Dataset Management (09h): Supported 00:40:39.338 00:40:39.338 Error Log 00:40:39.338 ========= 00:40:39.338 Entry: 0 00:40:39.338 Error Count: 0x3 00:40:39.338 Submission Queue Id: 0x0 00:40:39.338 Command Id: 0x5 00:40:39.338 Phase Bit: 0 00:40:39.338 Status Code: 0x2 00:40:39.338 Status Code Type: 0x0 00:40:39.338 Do Not Retry: 1 00:40:39.598 Error Location: 0x28 00:40:39.598 LBA: 0x0 00:40:39.598 Namespace: 0x0 00:40:39.598 Vendor Log Page: 0x0 00:40:39.598 ----------- 00:40:39.598 Entry: 1 00:40:39.598 Error Count: 0x2 00:40:39.598 Submission Queue Id: 0x0 00:40:39.598 Command Id: 0x5 00:40:39.598 Phase Bit: 0 00:40:39.598 Status Code: 0x2 00:40:39.598 Status Code Type: 0x0 00:40:39.598 Do Not Retry: 1 00:40:39.598 Error Location: 0x28 00:40:39.598 LBA: 0x0 00:40:39.598 Namespace: 0x0 00:40:39.598 Vendor Log Page: 0x0 00:40:39.598 ----------- 00:40:39.598 Entry: 2 00:40:39.598 Error Count: 0x1 00:40:39.598 Submission Queue Id: 0x0 00:40:39.598 Command Id: 0x0 00:40:39.598 Phase Bit: 0 00:40:39.598 Status Code: 0x2 00:40:39.598 Status Code Type: 0x0 00:40:39.598 Do Not Retry: 1 00:40:39.598 Error Location: 0x28 00:40:39.598 LBA: 0x0 00:40:39.598 Namespace: 0x0 00:40:39.598 Vendor Log Page: 0x0 00:40:39.598 00:40:39.598 Number of Queues 00:40:39.598 ================ 00:40:39.598 Number of I/O Submission Queues: 128 00:40:39.598 Number of I/O Completion Queues: 128 00:40:39.598 00:40:39.598 ZNS Specific Controller Data 00:40:39.598 ============================ 00:40:39.598 Zone Append Size Limit: 0 00:40:39.598 00:40:39.598 00:40:39.598 Active Namespaces 00:40:39.598 ================= 00:40:39.598 get_feature(0x05) failed 00:40:39.598 Namespace ID:1 00:40:39.598 Command Set Identifier: NVM (00h) 00:40:39.598 Deallocate: Supported 00:40:39.598 Deallocated/Unwritten Error: Not Supported 00:40:39.598 Deallocated Read Value: Unknown 00:40:39.598 Deallocate in Write Zeroes: Not Supported 00:40:39.598 Deallocated Guard Field: 0xFFFF 00:40:39.598 Flush: Supported 00:40:39.598 Reservation: Not Supported 00:40:39.598 Namespace Sharing Capabilities: Multiple Controllers 00:40:39.598 Size (in LBAs): 15628053168 (7452GiB) 00:40:39.598 Capacity (in LBAs): 15628053168 (7452GiB) 00:40:39.598 Utilization (in LBAs): 15628053168 (7452GiB) 00:40:39.598 UUID: 9501fac6-4794-402f-b435-688101a28a39 00:40:39.598 Thin Provisioning: Not Supported 00:40:39.598 Per-NS Atomic Units: Yes 00:40:39.598 Atomic Boundary Size (Normal): 0 00:40:39.598 Atomic Boundary Size (PFail): 0 00:40:39.598 Atomic Boundary Offset: 0 00:40:39.598 NGUID/EUI64 Never Reused: No 00:40:39.598 ANA group ID: 1 00:40:39.598 Namespace Write Protected: No 00:40:39.598 Number of LBA Formats: 1 00:40:39.598 Current LBA Format: LBA Format #00 00:40:39.598 LBA Format #00: Data Size: 512 Metadata Size: 0 00:40:39.598 
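The dump above is spdk_nvme_identify run against the kernel NVMe-oF target the test exported at 192.168.100.8:4420 over RDMA (first the discovery subsystem, then nqn.2016-06.io.spdk:testnqn). A minimal stand-alone reproduction of the second query, with the transport string copied from this run, would look like the sketch below; the nvme-cli alternative assumes nvme-cli and the nvme-rdma module are available on the host.

# Sketch only -- re-issues the identify query from this run by hand.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK_DIR"/build/bin/spdk_nvme_identify \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
# Roughly equivalent discovery query with nvme-cli:
# nvme discover -t rdma -a 192.168.100.8 -s 4420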
00:40:39.599 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:40:39.599 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:39.599 15:45:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:40:39.599 15:45:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:40:39.599 15:45:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:40:39.599 15:45:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:40:39.599 15:45:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:39.599 15:45:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:40:39.599 rmmod nvme_rdma 00:40:39.599 rmmod nvme_fabrics 00:40:39.599 15:45:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:39.599 15:45:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:40:39.599 15:45:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:40:39.599 15:45:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:40:39.599 15:45:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:39.599 15:45:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:40:39.599 15:45:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:40:39.599 15:45:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:40:39.599 15:45:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:40:39.599 15:45:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:39.599 15:45:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:39.599 15:45:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:40:39.599 15:45:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:39.599 15:45:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:40:39.599 15:45:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:40:39.599 15:45:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:40:42.896 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:40:42.896 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:40:42.896 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:40:42.896 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:40:42.896 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:40:42.896 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:40:42.896 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 
00:40:42.896 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:40:42.896 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:40:42.896 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:40:42.896 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:40:42.896 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:40:42.896 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:40:42.896 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:40:42.896 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:40:43.155 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:40:48.434 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:40:48.434 00:40:48.434 real 0m21.255s 00:40:48.434 user 0m4.922s 00:40:48.434 sys 0m10.730s 00:40:48.434 15:45:15 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:48.434 15:45:15 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:40:48.434 ************************************ 00:40:48.434 END TEST nvmf_identify_kernel_target 00:40:48.434 ************************************ 00:40:48.434 15:45:15 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:40:48.434 15:45:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:40:48.434 15:45:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:48.434 15:45:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:40:48.434 ************************************ 00:40:48.434 START TEST nvmf_auth_host 00:40:48.434 ************************************ 00:40:48.434 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:40:48.434 * Looking for test storage... 
00:40:48.434 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:40:48.434 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:48.434 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:48.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:48.435 --rc genhtml_branch_coverage=1 00:40:48.435 --rc genhtml_function_coverage=1 00:40:48.435 --rc genhtml_legend=1 00:40:48.435 --rc geninfo_all_blocks=1 00:40:48.435 --rc geninfo_unexecuted_blocks=1 00:40:48.435 00:40:48.435 ' 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:48.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:48.435 --rc genhtml_branch_coverage=1 00:40:48.435 --rc genhtml_function_coverage=1 00:40:48.435 --rc genhtml_legend=1 00:40:48.435 --rc geninfo_all_blocks=1 00:40:48.435 --rc geninfo_unexecuted_blocks=1 00:40:48.435 00:40:48.435 ' 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:48.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:48.435 --rc genhtml_branch_coverage=1 00:40:48.435 --rc genhtml_function_coverage=1 00:40:48.435 --rc genhtml_legend=1 00:40:48.435 --rc geninfo_all_blocks=1 00:40:48.435 --rc geninfo_unexecuted_blocks=1 00:40:48.435 00:40:48.435 ' 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:48.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:48.435 --rc genhtml_branch_coverage=1 00:40:48.435 --rc genhtml_function_coverage=1 00:40:48.435 --rc genhtml_legend=1 00:40:48.435 --rc geninfo_all_blocks=1 00:40:48.435 --rc geninfo_unexecuted_blocks=1 00:40:48.435 00:40:48.435 ' 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:48.435 15:45:15 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:48.435 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:40:48.435 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:40:48.436 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:40:48.436 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:40:48.436 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:40:48.436 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:40:48.436 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:40:48.436 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:48.436 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:48.436 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:48.436 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:48.436 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:48.436 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:48.436 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:48.436 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:48.436 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:48.436 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:40:48.436 15:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local 
-ga mlx 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:40:55.010 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:40:55.010 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:40:55.010 15:45:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:40:55.010 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:40:55.011 Found net devices under 0000:18:00.0: mlx_0_0 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:40:55.011 Found net devices under 0000:18:00.1: mlx_0_1 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # rdma_device_init 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:40:55.011 15:45:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:40:55.011 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:40:55.271 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:40:55.271 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:40:55.271 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:40:55.271 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:40:55.271 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:40:55.271 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:40:55.271 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:40:55.271 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:40:55.271 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:40:55.271 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:40:55.271 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:55.271 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:40:55.271 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:40:55.271 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:40:55.271 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:40:55.271 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:55.271 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:40:55.271 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:55.271 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 
00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:40:55.272 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:40:55.272 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:40:55.272 altname enp24s0f0np0 00:40:55.272 altname ens785f0np0 00:40:55.272 inet 192.168.100.8/24 scope global mlx_0_0 00:40:55.272 valid_lft forever preferred_lft forever 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:40:55.272 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:40:55.272 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:40:55.272 altname enp24s0f1np1 00:40:55.272 altname ens785f1np1 00:40:55.272 inet 192.168.100.9/24 scope global mlx_0_1 00:40:55.272 valid_lft forever preferred_lft forever 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 
00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:40:55.272 192.168.100.9' 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:40:55.272 192.168.100.9' 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # head -n 1 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:40:55.272 192.168.100.9' 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # tail -n +2 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # head -n 1 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:40:55.272 
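The trace above is nvmftestinit walking the two mlx5 ports (mlx_0_0 and mlx_0_1), reading their IPv4 addresses with the ip/awk/cut pipeline, and recording them as NVMF_FIRST_TARGET_IP=192.168.100.8 and NVMF_SECOND_TARGET_IP=192.168.100.9. A stand-alone way to list RDMA-capable interfaces and their addresses, reusing the same pipeline but walking sysfs instead of the rxe_cfg helper, might look like the sketch below (the /sys/class/infiniband layout is an assumption about how the HCA netdevs are exposed, not something the test script relies on).

# Sketch: list RDMA-capable netdevs and their IPv4 addresses.
for netdir in /sys/class/infiniband/*/device/net/*; do
    [ -e "$netdir" ] || continue
    ifname=$(basename "$netdir")
    addr=$(ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1)
    echo "$ifname ${addr:-<no IPv4 assigned>}"
done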
15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3314748 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3314748 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 3314748 ']' 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:55.272 15:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:56.210 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:56.210 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:40:56.210 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:56.210 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:56.210 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:56.210 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:56.210 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:40:56.210 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:40:56.210 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:40:56.210 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:56.210 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:40:56.210 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:40:56.210 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:40:56.210 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:40:56.210 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=753d96a4655c5e202769873725b742e8 00:40:56.210 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 
00:40:56.210 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Krt 00:40:56.210 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 753d96a4655c5e202769873725b742e8 0 00:40:56.210 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 753d96a4655c5e202769873725b742e8 0 00:40:56.210 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:40:56.210 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:40:56.210 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=753d96a4655c5e202769873725b742e8 00:40:56.210 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:40:56.210 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Krt 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Krt 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Krt 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9131ee29ebc94556a27e8ad97e8770d6dd301288accbc0aedeb23c705b6368b5 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Rwx 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9131ee29ebc94556a27e8ad97e8770d6dd301288accbc0aedeb23c705b6368b5 3 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9131ee29ebc94556a27e8ad97e8770d6dd301288accbc0aedeb23c705b6368b5 3 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9131ee29ebc94556a27e8ad97e8770d6dd301288accbc0aedeb23c705b6368b5 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Rwx 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Rwx 00:40:56.470 15:45:23 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Rwx 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7709e772db25c7601ece1a5cce20ca4640abd15305f7197f 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.eTG 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7709e772db25c7601ece1a5cce20ca4640abd15305f7197f 0 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7709e772db25c7601ece1a5cce20ca4640abd15305f7197f 0 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7709e772db25c7601ece1a5cce20ca4640abd15305f7197f 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.eTG 00:40:56.470 15:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.eTG 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.eTG 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e6032cd4c471a3a38908defbf95aa7e3c2d7306ac62a48f5 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.u93 00:40:56.470 
15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e6032cd4c471a3a38908defbf95aa7e3c2d7306ac62a48f5 2 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e6032cd4c471a3a38908defbf95aa7e3c2d7306ac62a48f5 2 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e6032cd4c471a3a38908defbf95aa7e3c2d7306ac62a48f5 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.u93 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.u93 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.u93 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=97dfb2e98256b54aaa7856daedc516d5 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.UY5 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 97dfb2e98256b54aaa7856daedc516d5 1 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 97dfb2e98256b54aaa7856daedc516d5 1 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=97dfb2e98256b54aaa7856daedc516d5 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:40:56.470 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.UY5 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.UY5 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.UY5 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:40:56.730 15:45:24 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1c423f9165b20c32b1738a8a4883233a 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.P8d 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1c423f9165b20c32b1738a8a4883233a 1 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1c423f9165b20c32b1738a8a4883233a 1 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1c423f9165b20c32b1738a8a4883233a 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.P8d 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.P8d 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.P8d 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4bb9220e5e002e9482282e251835b3e153b298dfb9c46fe7 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.WTg 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4bb9220e5e002e9482282e251835b3e153b298dfb9c46fe7 2 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 
4bb9220e5e002e9482282e251835b3e153b298dfb9c46fe7 2 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4bb9220e5e002e9482282e251835b3e153b298dfb9c46fe7 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.WTg 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.WTg 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.WTg 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:40:56.730 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0bd47812ca5e089f34aad117c43ea8ac 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.AGy 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0bd47812ca5e089f34aad117c43ea8ac 0 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0bd47812ca5e089f34aad117c43ea8ac 0 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0bd47812ca5e089f34aad117c43ea8ac 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.AGy 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.AGy 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.AGy 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:56.731 
15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=85aa63d2c43dd544d77a9ae230082ad947fd203017135798ccdc2c2ef1db8579 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.pAg 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 85aa63d2c43dd544d77a9ae230082ad947fd203017135798ccdc2c2ef1db8579 3 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 85aa63d2c43dd544d77a9ae230082ad947fd203017135798ccdc2c2ef1db8579 3 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=85aa63d2c43dd544d77a9ae230082ad947fd203017135798ccdc2c2ef1db8579 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:40:56.731 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.pAg 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.pAg 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.pAg 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3314748 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 3314748 ']' 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:56.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
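Note: the gen_dhchap_key / format_dhchap_key entries above draw random bytes with xxd and wrap the resulting hex string in a DH-HMAC-CHAP secret of the form DHHC-1:<digest-id>:<base64 blob>:. A minimal stand-alone sketch of that step follows; it is not the exact nvmf/common.sh code, and it assumes (based on the DHHC-1 values printed in this log) that the ASCII hex string itself is used as the secret and that a little-endian CRC-32 of it is appended before base64 encoding.

# Hypothetical reproduction of "gen_dhchap_key sha512 64" followed by format_dhchap_key.
# Assumptions: secret bytes = the ASCII hex string; trailing 4 bytes = CRC-32 (little-endian).
hex=$(xxd -p -c0 -l 32 /dev/urandom)        # 64 hex chars, as for "gen_dhchap_key sha512 64"
python3 - "$hex" <<'PY'
import sys, base64, struct, zlib
secret = sys.argv[1].encode()               # hex string used as the secret bytes
crc = struct.pack("<I", zlib.crc32(secret)) # integrity tag appended before encoding (assumed)
print("DHHC-1:03:%s:" % base64.b64encode(secret + crc).decode())   # 03 = sha512 transform id
PY

Writing the printed string to a 0600 file under /tmp, as the log does with mktemp and chmod, yields the key files that are later registered with keyring_file_add_key.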
00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Krt 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Rwx ]] 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Rwx 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.eTG 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.990 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.u93 ]] 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.u93 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.UY5 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.P8d ]] 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.P8d 00:40:57.250 15:45:24 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.WTg 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.AGy ]] 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.AGy 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.pAg 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:40:57.250 15:45:24 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:40:57.250 15:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:41:00.540 Waiting for block devices as requested 00:41:00.540 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:41:00.540 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:41:00.540 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:41:00.540 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:41:00.798 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:41:00.798 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:41:00.798 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:41:01.057 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:41:01.057 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:41:01.057 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:41:01.317 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:41:01.317 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:41:01.317 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:41:01.576 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:41:01.576 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:41:01.576 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:41:01.835 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:41:02.402 15:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:41:02.402 15:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:41:02.402 15:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:41:02.402 15:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:41:02.402 15:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:41:02.402 15:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:41:02.402 15:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:41:02.402 15:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:41:02.402 15:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:41:02.402 No valid GPT data, bailing 00:41:02.402 15:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:41:02.402 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:41:02.402 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:41:02.402 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:41:02.403 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:41:02.403 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:41:02.403 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:41:02.403 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:41:02.403 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:41:02.403 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:41:02.403 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:41:02.403 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:41:02.403 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:41:02.403 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo rdma 00:41:02.403 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:41:02.403 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:41:02.403 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:41:02.662 00:41:02.662 Discovery Log Number of Records 2, Generation counter 2 00:41:02.662 =====Discovery Log Entry 0====== 00:41:02.662 trtype: rdma 00:41:02.662 adrfam: ipv4 00:41:02.662 subtype: current discovery subsystem 00:41:02.662 treq: not specified, sq flow control disable supported 00:41:02.662 portid: 1 00:41:02.662 trsvcid: 4420 00:41:02.662 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:41:02.662 traddr: 192.168.100.8 00:41:02.662 eflags: none 00:41:02.662 rdma_prtype: not specified 00:41:02.662 rdma_qptype: connected 00:41:02.662 rdma_cms: rdma-cm 00:41:02.662 rdma_pkey: 0x0000 00:41:02.662 =====Discovery Log Entry 1====== 00:41:02.662 trtype: rdma 00:41:02.662 adrfam: ipv4 00:41:02.662 subtype: nvme subsystem 00:41:02.662 treq: not specified, sq flow control disable supported 00:41:02.662 portid: 1 00:41:02.662 trsvcid: 4420 00:41:02.662 subnqn: nqn.2024-02.io.spdk:cnode0 00:41:02.662 traddr: 192.168.100.8 00:41:02.662 eflags: none 00:41:02.662 rdma_prtype: not specified 00:41:02.662 rdma_qptype: connected 00:41:02.662 rdma_cms: rdma-cm 00:41:02.662 rdma_pkey: 0x0000 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: ]] 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:02.662 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:02.922 nvme0n1 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: ]] 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:02.922 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.181 nvme0n1 00:41:03.181 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:03.181 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:03.181 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:03.181 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:03.181 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.181 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:03.181 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:03.181 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:03.181 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:03.181 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: ]] 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
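Note: the entries above complete one connect_authenticate round on the host side: the generated key files are registered in the keyring, bdev_nvme_set_options restricts the allowed DH-HMAC-CHAP digests and DH groups, and bdev_nvme_attach_controller connects over RDMA with --dhchap-key/--dhchap-ctrlr-key. A sketch of the same round as plain rpc.py calls is below; it assumes rpc_cmd in this log wraps scripts/rpc.py against /var/tmp/spdk.sock, and reuses the key names, NQNs and 192.168.100.8 target taken from the log.

# One digest/dhgroup round of connect_authenticate, expressed as direct RPC calls (sketch).
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"                    # assumed rpc_cmd equivalent
$RPC keyring_file_add_key key1  /tmp/spdk.key-null.eTG        # host secret from the log
$RPC keyring_file_add_key ckey1 /tmp/spdk.key-sha384.u93      # controller secret from the log
$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$RPC bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
$RPC bdev_nvme_get_controllers           # expect "nvme0" once DH-HMAC-CHAP succeeds
$RPC bdev_nvme_detach_controller nvme0   # tear down before the next digest/dhgroup round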
00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:03.441 15:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.441 nvme0n1 00:41:03.441 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:03.441 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:03.441 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:03.441 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:03.441 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.441 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:03.700 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:03.700 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:03.700 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:41:03.700 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.700 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:03.700 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:03.700 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:41:03.700 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:03.700 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:03.700 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:03.700 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:03.700 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:03.700 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:03.700 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:03.700 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: ]] 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:03.701 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.960 nvme0n1 00:41:03.960 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:03.960 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:03.960 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:03.960 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:03.960 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.960 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:03.960 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:03.960 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:03.960 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:03.960 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.960 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:03.960 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:03.960 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:41:03.960 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:03.960 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:03.960 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:03.960 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:03.960 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:03.960 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:03.960 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:03.960 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:03.960 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:03.960 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: ]] 00:41:03.960 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:03.960 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:41:03.961 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:03.961 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:03.961 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:03.961 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:03.961 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:03.961 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:03.961 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:03.961 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.961 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:03.961 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:03.961 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:03.961 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:03.961 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:03.961 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:03.961 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:03.961 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:03.961 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:03.961 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:03.961 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:03.961 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:03.961 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:03.961 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:03.961 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.220 nvme0n1 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:04.220 15:45:31 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
local ip 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:04.220 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.480 nvme0n1 00:41:04.480 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:04.480 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:04.480 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:04.480 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:04.480 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.480 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:04.480 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:04.480 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:04.480 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:04.480 15:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 
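[editor's note] Each "nvme0n1" block in this trace is one iteration of the auth loop in host/auth.sh: it programs a target-side key (nvmet_auth_set_key), constrains the host to one digest and DH group, attaches over RDMA with DH-HMAC-CHAP, confirms the controller came up, and detaches before the next combination. A minimal sketch of that per-iteration RPC sequence follows, assuming rpc_cmd is the autotest RPC wrapper used throughout this log and that $digest, $dhgroup and $keyid are illustrative stand-ins for the script's loop variables; addresses and NQNs are the ones visible in the trace.

    # restrict the host to the digest/DH group under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # attach over RDMA with the host key (controller key added only when bidirectional)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
    # verify the authenticated controller exists, then tear it down for the next iteration
    [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
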
00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: ]] 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:04.480 15:45:32 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:04.480 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.740 nvme0n1 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: ]] 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:04.740 15:45:32 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:04.740 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.000 nvme0n1 00:41:05.000 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.000 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:05.000 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:05.000 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.000 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.000 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.000 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:05.000 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:05.000 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.000 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: ]] 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP 
]] 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.260 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.520 nvme0n1 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: ]] 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # 
echo DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.520 15:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.520 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.520 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:05.520 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:05.520 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:05.520 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:05.520 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:05.520 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:05.520 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:05.520 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:05.520 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:05.520 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:05.520 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:05.520 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:05.520 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.520 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.780 nvme0n1 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.780 15:45:33 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.780 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:06.040 nvme0n1 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:06.040 
15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: ]] 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.040 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:06.300 nvme0n1 00:41:06.300 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.300 
15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:06.300 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:06.300 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.300 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:06.559 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.559 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:06.559 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:06.559 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.559 15:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: ]] 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:06.559 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:06.560 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:06.560 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:06.560 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:06.560 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:06.560 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.560 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:06.870 nvme0n1 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key 
ckey 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: ]] 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:06.870 
15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.870 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:07.182 nvme0n1 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: ]] 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:07.182 15:45:34 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.182 15:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:07.468 nvme0n1 00:41:07.468 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.468 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:07.468 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:07.468 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.468 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:07.468 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.468 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:07.468 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:07.468 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.468 
15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.728 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:07.987 nvme0n1 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: ]] 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.988 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:08.556 nvme0n1 00:41:08.556 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:08.556 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:08.556 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:08.556 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
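
The connect_authenticate steps traced above and below reduce to four SPDK RPC calls per iteration. A minimal stand-alone sketch of that host-side cycle follows; it is an illustration, not the script itself, and it assumes SPDK's scripts/rpc.py is on PATH, that the target from this run is still listening on 192.168.100.8:4420 with the same NQNs, and that key0/ckey0 were registered earlier in the job (rpc_cmd in the trace is the autotest wrapper around the same RPC client).

#!/usr/bin/env bash
# One host-side DH-HMAC-CHAP cycle, mirroring the connect_authenticate() trace:
# pin the digest/dhgroup, attach over RDMA with the key pair for this keyid,
# confirm the controller appeared, then detach for the next combination.
set -e

rpc=rpc.py            # assumption: SPDK's scripts/rpc.py reachable on PATH
digest=sha256
dhgroup=ffdhe6144
keyid=0               # assumption: key0/ckey0 already loaded earlier in the run

# Restrict negotiation to the digest/dhgroup under test.
$rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach over RDMA using the key pair for this keyid.
$rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Authentication succeeded only if the controller is now visible.
$rpc bdev_nvme_get_controllers | jq -r '.[].name' | grep -qx nvme0

# Tear down so the next digest/dhgroup/keyid combination starts clean.
$rpc bdev_nvme_detach_controller nvme0
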
00:41:08.556 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:08.556 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:08.556 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:08.557 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:08.557 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:08.557 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:08.557 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:08.557 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:08.557 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:41:08.557 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:08.557 15:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: ]] 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:08.557 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:08.816 nvme0n1 00:41:08.816 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:08.816 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:08.816 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:08.816 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:08.816 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:08.816 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
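
The get_main_ns_ip expansion repeated before each attach picks the address to dial from per-transport environment variables and prints it. A rough reconstruction of that helper, inferred only from the xtrace (the trace shows the expanded values; the TEST_TRANSPORT name and the indirect expansion are assumptions), looks like:

# Sketch of nvmf/common.sh's get_main_ns_ip as traced at @769-783:
# map the transport under test to the env var holding the target address,
# dereference it, and print the result.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )

    [[ -z $TEST_TRANSPORT ]] && return 1                    # trace: [[ -z rdma ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # trace: [[ -z NVMF_FIRST_TARGET_IP ]]
    ip=${ip_candidates[$TEST_TRANSPORT]}

    [[ -z ${!ip} ]] && return 1   # indirect expansion; [[ -z 192.168.100.8 ]] in this job
    echo "${!ip}"                 # 192.168.100.8, the address every attach below dials
}
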
00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: ]] 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:09.075 15:45:36 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:09.075 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:09.334 nvme0n1 00:41:09.334 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:09.334 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:09.334 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:09.334 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:09.334 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:09.334 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:09.334 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:09.334 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:09.334 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:09.334 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:09.593 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:09.593 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:09.593 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:41:09.593 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:09.593 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:09.593 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:09.593 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:09.593 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:09.593 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:09.593 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:09.593 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:09.593 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:09.593 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: ]] 00:41:09.593 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:09.593 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:41:09.593 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:09.593 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:09.593 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:09.593 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:09.593 15:45:36 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:09.593 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:41:09.593 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:09.593 15:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:09.593 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:09.593 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:09.593 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:09.593 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:09.593 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:09.593 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:09.593 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:09.593 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:09.593 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:09.593 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:09.593 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:09.593 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:09.593 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:09.593 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:09.593 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:09.852 nvme0n1 00:41:09.853 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:09.853 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:09.853 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:09.853 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:09.853 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:09.853 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:09.853 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:09.853 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:09.853 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:09.853 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:10.112 15:45:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:10.112 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:10.372 nvme0n1 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: ]] 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 
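
Stepping back from the individual entries, the recurring host/auth.sh@100-104 markers show the shape of the whole run: every digest is crossed with every FFDHE group and every key index, and each combination gets one set-key/connect/verify/detach cycle. A condensed sketch of that driver loop is below; the lists only contain the values visible in this excerpt (the script's real arrays may be longer), and keys[]/ckeys[] plus the two helpers are assumed to be defined earlier in auth.sh.

# Driver loop implied by the host/auth.sh@100-104 trace markers.
digests=(sha256 sha384)                               # values seen in this excerpt
dhgroups=(ffdhe2048 ffdhe4096 ffdhe6144 ffdhe8192)    # values seen in this excerpt

for digest in "${digests[@]}"; do            # @100
    for dhgroup in "${dhgroups[@]}"; do      # @101
        for keyid in "${!keys[@]}"; do       # @102, keyids 0-4 in this run
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103, target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104, host side
        done
    done
done
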
00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:10.372 15:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:10.372 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:10.372 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:10.372 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:10.372 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:10.372 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:10.372 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:10.372 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:10.372 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:10.372 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:10.372 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:10.372 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:10.372 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:10.372 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:11.310 nvme0n1 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: ]] 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:11.310 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:11.311 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:11.311 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:11.311 15:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:11.879 nvme0n1 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:11.879 15:45:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: ]] 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:11.879 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:12.448 nvme0n1 00:41:12.448 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:12.448 
15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:12.448 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:12.448 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:12.448 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:12.448 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:12.448 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:12.448 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:12.448 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:12.448 15:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: ]] 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:12.448 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:13.386 nvme0n1 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:13.386 15:45:40 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:13.386 15:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:13.956 nvme0n1 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: ]] 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
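
On the target side, nvmet_auth_set_key (the @42-51 entries, now running with the sha384 digest) only shows the values being echoed; xtrace does not record where the redirects point. A plausible reconstruction is sketched below, assuming the writes land in the kernel nvmet configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) under the host NQN used in the attach calls; those destinations are an assumption, not something the log confirms.

# Target-side half of the handshake, as suggested by the @42-51 entries.
# NOTE: the configfs path and attribute names are assumptions; the log only
# shows the echoed values, never the files they are written to.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}     # assumed array lookup
    local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed

    echo "hmac(${digest})" > "$host_dir/dhchap_hash"      # e.g. 'hmac(sha384)' above
    echo "$dhgroup"        > "$host_dir/dhchap_dhgroup"
    echo "$key"            > "$host_dir/dhchap_key"
    # keyid 4 carries no controller key in this run, hence the empty-ckey guard.
    [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"
}
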
00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:13.956 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.216 nvme0n1 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: ]] 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.216 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.476 nvme0n1 00:41:14.476 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.476 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:14.476 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.476 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:14.476 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.476 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.476 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:14.476 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:14.476 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.476 15:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:14.476 15:45:42 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: ]] 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.476 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.736 nvme0n1 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: ]] 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.736 15:45:42 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:14.736 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:14.737 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:14.737 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.737 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.996 nvme0n1 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=4 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:14.996 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:14.997 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:14.997 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:14.997 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:14.997 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:14.997 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:14.997 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:14.997 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:14.997 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:14.997 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:14.997 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:14.997 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.997 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:41:15.256 nvme0n1 00:41:15.256 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.256 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:15.256 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.256 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:15.256 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.256 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.256 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:15.256 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:15.256 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.256 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: ]] 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:15.516 
15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.516 15:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.516 nvme0n1 00:41:15.516 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.516 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:15.516 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:15.516 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.516 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- 
# for keyid in "${!keys[@]}" 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: ]] 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.776 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.035 nvme0n1 00:41:16.035 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:16.035 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:16.035 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:16.035 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:16.035 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.035 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:16.035 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:16.035 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:16.035 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:16.035 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.035 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:16.035 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:16.035 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:41:16.035 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:16.035 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:16.035 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:16.035 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:16.035 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:16.035 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:16.035 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: ]] 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- 
# echo DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:16.036 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.296 nvme0n1 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:16.296 15:45:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: ]] 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:16.296 15:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.556 nvme0n1 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:16.556 15:45:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:16.556 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:41:16.815 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:16.815 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.815 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:16.815 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:16.815 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:16.815 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:16.815 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:16.815 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:16.815 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:16.815 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:16.815 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:16.815 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:16.815 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:16.815 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:16.815 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:16.815 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:16.815 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.815 nvme0n1 00:41:16.815 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:16.815 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:16.815 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
jq -r '.[].name' 00:41:16.815 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:16.815 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:16.815 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: ]] 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:41:17.075 15:45:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.075 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:17.335 nvme0n1 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:17.335 15:45:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: ]] 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:17.335 15:45:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.335 15:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:17.594 nvme0n1 00:41:17.594 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.594 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:17.594 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.594 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:17.594 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:17.594 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: ]] 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.854 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:18.113 nvme0n1 00:41:18.113 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.113 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:18.113 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.113 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:18.113 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:18.113 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.113 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:18.113 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:18.113 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:41:18.113 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:18.113 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.113 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:18.113 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:41:18.113 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: ]] 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:18.114 15:45:45 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.114 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:18.373 nvme0n1 00:41:18.373 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.373 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:18.373 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:18.373 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.373 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:18.373 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.373 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:18.373 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:18.373 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.373 15:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:18.632 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.632 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:18.632 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:41:18.632 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:18.632 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:18.632 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:18.632 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:18.632 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:18.632 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:18.632 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:18.632 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:18.632 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:18.632 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:18.632 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:41:18.632 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:18.632 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:18.632 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:18.632 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:18.632 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:18.632 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:41:18.632 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.632 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:18.632 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.633 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:18.633 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:18.633 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:18.633 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:18.633 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:18.633 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:18.633 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:18.633 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:18.633 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:18.633 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:18.633 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:18.633 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:18.633 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.633 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:18.892 nvme0n1 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:18.892 15:45:46 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: ]] 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:18.892 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:19.461 nvme0n1 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: ]] 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.461 15:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:19.720 nvme0n1 00:41:19.720 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.720 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:19.720 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.720 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:19.720 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:19.720 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.720 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:19.720 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:19.720 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.720 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: ]] 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:19.980 15:45:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.980 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.239 nvme0n1 00:41:20.239 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.239 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:20.239 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:20.239 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.239 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.239 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.239 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:20.239 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:20.239 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.239 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
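The iteration that finishes just above is the pattern this run repeats for every digest/dhgroup/key-slot combination: connect_authenticate in host/auth.sh narrows the host to one DH-HMAC-CHAP digest and DH group, attaches a controller with that slot's key pair, checks that a controller named nvme0 appears, and detaches it before moving to the next slot. A minimal host-side sketch of that cycle with SPDK's scripts/rpc.py follows; it assumes the default RPC socket, that the RDMA listener configured earlier in this job is still at 192.168.100.8:4420, and that the key names key2/ckey2 were provisioned on the target beforehand (names reused from the trace, not defined here).

  #!/usr/bin/env bash
  # One connect_authenticate iteration (sha384 / ffdhe6144 / key slot 2), as traced above.
  # Assumptions: SPDK app running with the default RPC socket; listener reachable at
  # 192.168.100.8:4420; key2/ckey2 already set on the target side for this host.
  set -euo pipefail
  rpc=./scripts/rpc.py
  ip=192.168.100.8                      # what get_main_ns_ip resolves to for rdma (NVMF_FIRST_TARGET_IP)
  hostnqn=nqn.2024-02.io.spdk:host0
  subnqn=nqn.2024-02.io.spdk:cnode0

  # Restrict the host to the digest/dhgroup pair under test.
  $rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

  # Attach with this slot's key pair; DH-HMAC-CHAP runs during the connect.
  $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a "$ip" -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Verify the controller came up, then tear it down for the next slot.
  [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  $rpc bdev_nvme_detach_controller nvme0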
00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: ]] 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.498 15:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.757 nvme0n1 00:41:20.757 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.757 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:20.757 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.757 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:20.757 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.757 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.757 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:20.757 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:20.757 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.757 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.757 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.757 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:20.757 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:41:20.757 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:20.757 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:20.757 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:20.757 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:20.757 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha384 ffdhe6144 4 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:20.758 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:21.017 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:21.017 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:21.017 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:21.276 nvme0n1 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: ]] 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:21.276 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:21.277 15:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:21.845 nvme0n1 00:41:21.845 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:21.845 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:21.845 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:21.845 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:21.845 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:21.845 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:22.133 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:22.133 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:22.133 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:22.133 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:22.133 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:22.133 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:22.133 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:41:22.133 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:22.133 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:22.133 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:22.133 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:22.133 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:22.133 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:22.133 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:22.133 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:22.133 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:22.133 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: ]] 00:41:22.133 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:22.133 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:41:22.133 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:22.133 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:22.133 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:22.133 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:22.134 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:22.134 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:41:22.134 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:22.134 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:22.134 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:22.134 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:22.134 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:22.134 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:22.134 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:22.134 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:22.134 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:22.134 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:22.134 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:22.134 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:22.134 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:22.134 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:22.134 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:22.134 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:22.134 15:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:41:22.705 nvme0n1 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: ]] 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:22.705 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:22.706 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:22.706 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:22.706 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:41:22.706 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:22.706 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:22.706 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:22.706 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:22.706 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:22.706 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:22.706 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:22.706 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:22.706 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:22.706 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:22.706 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:22.706 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:22.706 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:22.706 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:22.706 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:22.706 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:22.706 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.273 nvme0n1 00:41:23.273 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:23.273 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:23.273 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:23.274 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:23.274 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.274 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:23.274 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:23.274 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:23.274 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:23.274 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: ]] 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:23.533 
15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:23.533 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:23.534 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:23.534 15:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.102 nvme0n1 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:24.102 15:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.670 nvme0n1 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: ]] 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:24.670 15:45:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:24.670 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.008 nvme0n1 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:25.008 15:45:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: ]] 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.008 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.267 nvme0n1 00:41:25.267 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.267 15:45:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:25.267 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.267 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:25.267 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.267 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.267 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:25.267 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:25.267 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.267 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.267 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.267 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:25.267 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:41:25.267 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:25.267 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:25.267 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:25.267 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:25.267 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: ]] 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.268 15:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.527 nvme0n1 00:41:25.527 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.527 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:25.527 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.527 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:25.527 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.527 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.527 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:25.527 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:25.527 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.527 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.527 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.527 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:25.527 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:41:25.527 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:25.527 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:25.527 15:45:53 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:25.527 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:25.527 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:25.527 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:25.527 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:25.527 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:25.527 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:25.527 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: ]] 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.786 nvme0n1 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.786 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:26.045 15:45:53 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:26.045 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:26.046 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:26.046 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:26.046 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:26.046 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:26.046 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:26.046 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:26.046 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:26.046 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.046 nvme0n1 00:41:26.046 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.046 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:26.046 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:26.046 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:26.046 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 
00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: ]] 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:26.305 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.565 nvme0n1 00:41:26.565 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.565 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:26.565 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:26.565 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:26.565 15:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: ]] 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:26.565 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.825 nvme0n1 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:26.825 15:45:54 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: ]] 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:26.825 15:45:54 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:26.825 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.084 nvme0n1 00:41:27.084 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:27.084 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:27.084 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:27.084 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:27.084 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 
00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: ]] 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:27.085 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.344 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:27.344 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:27.344 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:27.344 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:27.344 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:27.344 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:27.344 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:27.344 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:27.344 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:27.344 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:27.344 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:27.344 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:27.344 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:27.344 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:27.344 15:45:54 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.344 nvme0n1 00:41:27.344 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:27.344 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:27.344 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:27.344 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:27.344 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.344 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:27.603 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:27.603 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:27.603 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:27.603 15:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:27.603 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.862 nvme0n1 00:41:27.862 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:27.862 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:27.862 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:27.862 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:27.862 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.862 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:27.862 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:27.862 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:27.862 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:27.862 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.862 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:27.862 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:27.862 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:27.862 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:41:27.862 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:41:27.862 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:27.862 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:27.862 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: ]] 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
192.168.100.8 ]] 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:27.863 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.122 nvme0n1 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: ]] 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:28.122 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.123 15:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.382 nvme0n1 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:28.642 15:45:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: ]] 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:28.642 15:45:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.642 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.902 nvme0n1 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: ]] 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.902 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.161 nvme0n1 00:41:29.161 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:29.161 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:29.161 
15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:29.161 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:29.161 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.161 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:29.421 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:29.422 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:29.422 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:29.422 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:29.422 15:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.681 nvme0n1 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:29.681 15:45:57 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: ]] 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:29.681 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:30.250 nvme0n1 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: ]] 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:30.250 15:45:57 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.250 15:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:30.509 nvme0n1 00:41:30.509 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.509 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:30.509 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.509 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:30.509 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:30.509 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: ]] 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 
00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.768 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.026 nvme0n1 00:41:31.027 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:31.027 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:31.027 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:31.027 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:31.027 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.027 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:31.027 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:31.027 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:31.027 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:31.027 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.285 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:31.285 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:31.285 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: ]] 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:31.286 15:45:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.545 nvme0n1 00:41:31.545 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:31.545 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:31.545 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:31.545 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:31.545 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:41:31.545 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:31.545 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:31.545 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:31.545 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:31.545 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:31.804 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.064 nvme0n1 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 
00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzUzZDk2YTQ2NTVjNWUyMDI3Njk4NzM3MjViNzQyZThflMGD: 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: ]] 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTEzMWVlMjllYmM5NDU1NmEyN2U4YWQ5N2U4NzcwZDZkZDMwMTI4OGFjY2JjMGFlZGViMjNjNzA1YjYzNjhiNTDuxnE=: 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:32.064 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.323 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:32.323 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:32.323 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:32.323 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:32.323 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:32.323 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:32.323 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:32.323 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:32.323 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:32.323 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:32.323 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:32.323 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:32.323 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:32.323 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:32.323 15:45:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.890 nvme0n1 
00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: ]] 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:32.890 15:46:00 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:32.890 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.458 nvme0n1 00:41:33.458 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:33.458 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:33.458 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:33.458 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:33.458 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.458 15:46:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 
2 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: ]] 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 
00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:33.458 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.026 nvme0n1 00:41:34.026 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:34.026 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:34.026 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:34.026 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:34.026 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGJiOTIyMGU1ZTAwMmU5NDgyMjgyZTI1MTgzNWIzZTE1M2IyOThkZmI5YzQ2ZmU37+un1g==: 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: ]] 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGJkNDc4MTJjYTVlMDg5ZjM0YWFkMTE3YzQzZWE4YWPdVToV: 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:41:34.285 15:46:01 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:34.285 15:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.853 nvme0n1 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhYTYzZDJjNDNkZDU0NGQ3N2E5YWUyMzAwODJhZDk0N2ZkMjAzMDE3MTM1Nzk4Y2NkYzJjMmVmMWRiODU3OR0Smzs=: 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:34.853 15:46:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.421 nvme0n1 00:41:35.421 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:35.421 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:35.421 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:35.421 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:35.421 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.421 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: ]] 00:41:35.680 
15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.680 request: 00:41:35.680 { 00:41:35.680 "name": "nvme0", 00:41:35.680 "trtype": "rdma", 00:41:35.680 "traddr": "192.168.100.8", 00:41:35.680 "adrfam": "ipv4", 00:41:35.680 "trsvcid": "4420", 00:41:35.680 "subnqn": 
"nqn.2024-02.io.spdk:cnode0", 00:41:35.680 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:41:35.680 "prchk_reftag": false, 00:41:35.680 "prchk_guard": false, 00:41:35.680 "hdgst": false, 00:41:35.680 "ddgst": false, 00:41:35.680 "allow_unrecognized_csi": false, 00:41:35.680 "method": "bdev_nvme_attach_controller", 00:41:35.680 "req_id": 1 00:41:35.680 } 00:41:35.680 Got JSON-RPC error response 00:41:35.680 response: 00:41:35.680 { 00:41:35.680 "code": -5, 00:41:35.680 "message": "Input/output error" 00:41:35.680 } 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:35.680 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.939 request: 00:41:35.939 { 00:41:35.939 "name": "nvme0", 00:41:35.939 "trtype": "rdma", 00:41:35.939 "traddr": "192.168.100.8", 00:41:35.939 "adrfam": "ipv4", 00:41:35.939 "trsvcid": "4420", 00:41:35.939 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:41:35.939 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:41:35.939 "prchk_reftag": false, 00:41:35.939 "prchk_guard": false, 00:41:35.939 "hdgst": false, 00:41:35.939 "ddgst": false, 00:41:35.939 "dhchap_key": "key2", 00:41:35.939 "allow_unrecognized_csi": false, 00:41:35.939 "method": "bdev_nvme_attach_controller", 00:41:35.939 "req_id": 1 00:41:35.939 } 00:41:35.939 Got JSON-RPC error response 00:41:35.939 response: 00:41:35.939 { 00:41:35.939 "code": -5, 00:41:35.939 "message": "Input/output error" 00:41:35.939 } 00:41:35.939 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:41:35.939 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:41:35.939 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:35.939 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:35.939 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:35.939 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:41:35.939 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:35.939 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:41:35.939 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.939 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:35.939 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:41:35.939 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:41:35.939 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:35.939 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:35.939 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:35.940 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:35.940 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:35.940 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma 
]] 00:41:35.940 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:35.940 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:35.940 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:35.940 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:35.940 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:41:35.940 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:41:35.940 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:41:35.940 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:41:35.940 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:35.940 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:41:35.940 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:35.940 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:41:35.940 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:35.940 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.199 request: 00:41:36.199 { 00:41:36.199 "name": "nvme0", 00:41:36.199 "trtype": "rdma", 00:41:36.199 "traddr": "192.168.100.8", 00:41:36.199 "adrfam": "ipv4", 00:41:36.199 "trsvcid": "4420", 00:41:36.199 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:41:36.199 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:41:36.199 "prchk_reftag": false, 00:41:36.199 "prchk_guard": false, 00:41:36.199 "hdgst": false, 00:41:36.199 "ddgst": false, 00:41:36.199 "dhchap_key": "key1", 00:41:36.199 "dhchap_ctrlr_key": "ckey2", 00:41:36.199 "allow_unrecognized_csi": false, 00:41:36.199 "method": "bdev_nvme_attach_controller", 00:41:36.199 "req_id": 1 00:41:36.199 } 00:41:36.199 Got JSON-RPC error response 00:41:36.199 response: 00:41:36.199 { 00:41:36.199 "code": -5, 00:41:36.199 "message": "Input/output error" 00:41:36.199 } 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:41:36.199 15:46:03 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.199 nvme0n1 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: ]] 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.199 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.459 
15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.459 request: 00:41:36.459 { 00:41:36.459 "name": "nvme0", 00:41:36.459 "dhchap_key": "key1", 00:41:36.459 "dhchap_ctrlr_key": "ckey2", 00:41:36.459 "method": "bdev_nvme_set_keys", 00:41:36.459 "req_id": 1 00:41:36.459 } 00:41:36.459 Got JSON-RPC error response 00:41:36.459 response: 00:41:36.459 { 00:41:36.459 "code": -13, 00:41:36.459 "message": "Permission denied" 00:41:36.459 } 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@137 -- # (( 1 != 0 )) 00:41:36.459 15:46:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:41:37.396 15:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:41:37.396 15:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:41:37.396 15:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.396 15:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:37.655 15:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.655 15:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:41:37.655 15:46:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:41:38.590 15:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:41:38.590 15:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:41:38.590 15:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:38.590 15:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:38.590 15:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:38.590 15:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:41:38.590 15:46:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:41:39.527 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:41:39.527 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:41:39.527 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:39.527 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:39.527 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:39.786 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:41:39.786 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:41:39.786 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:39.786 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:39.786 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:39.786 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:39.786 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:39.786 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:39.786 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:39.786 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:39.786 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcwOWU3NzJkYjI1Yzc2MDFlY2UxYTVjY2UyMGNhNDY0MGFiZDE1MzA1ZjcxOTdmJlElew==: 00:41:39.786 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: ]] 00:41:39.786 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYwMzJjZDRjNDcxYTNhMzg5MDhkZWZiZjk1YWE3ZTNjMmQ3MzA2YWM2MmE0OGY1CsgUZQ==: 00:41:39.786 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:41:39.786 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:39.786 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:39.786 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:39.786 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:39.786 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:39.786 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:41:39.786 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:41:39.786 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:41:39.786 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:39.787 nvme0n1 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTdkZmIyZTk4MjU2YjU0YWFhNzg1NmRhZWRjNTE2ZDXTJ9TU: 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: ]] 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWM0MjNmOTE2NWIyMGMzMmIxNzM4YThhNDg4MzIzM2EdrQPk: 
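[annotation] The trace above has just reattached nvme0 with key1/ckey1 and rotated the kernel target to the keyid-2 pair; the NOT rpc_cmd call that follows offers a mismatched key set on purpose. A minimal bash sketch of that negative check, assuming SPDK's scripts/rpc.py is on PATH and reusing the controller, NQN and key names from this run; the if-block is only an illustration of what the harness's NOT helper asserts, not the harness itself:

  # Reconnect the host controller with the key pair the target currently accepts.
  rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1 \
      --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1

  # Re-keying with a pair the target was never given must fail; the log records
  # JSON-RPC error -13 (Permission denied) for exactly this call.
  if rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1; then
      echo "re-key unexpectedly succeeded" >&2
      exit 1
  fi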
00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:39.787 15:46:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:41.163 request: 00:41:41.163 { 00:41:41.163 "name": "nvme0", 00:41:41.163 "dhchap_key": "key2", 00:41:41.163 "dhchap_ctrlr_key": "ckey1", 00:41:41.163 "method": "bdev_nvme_set_keys", 00:41:41.163 "req_id": 1 00:41:41.163 } 00:41:41.163 Got JSON-RPC error response 00:41:41.163 response: 00:41:41.163 { 00:41:41.163 "code": -13, 00:41:41.163 "message": "Permission denied" 00:41:41.163 } 00:41:41.163 15:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:41:41.163 15:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:41:41.163 15:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:41.163 15:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:41.163 15:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:41.163 15:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:41:41.163 15:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:41:41.164 15:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:41.164 15:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:41.164 15:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:41.164 15:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:41:41.164 15:46:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- 
# (( 0 != 0 )) 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:41:42.101 rmmod nvme_rdma 00:41:42.101 rmmod nvme_fabrics 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3314748 ']' 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3314748 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 3314748 ']' 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 3314748 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3314748 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3314748' 00:41:42.101 killing process with pid 3314748 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 3314748 00:41:42.101 15:46:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 3314748 00:41:54.464 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 976: 3314748 Aborted (core dumped) "${NVMF_APP[@]}" "$@" 00:41:54.464 15:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:54.464 15:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:41:54.464 15:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:41:54.464 15:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:41:54.464 15:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # 
clean_kernel_target 00:41:54.464 15:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:41:54.464 15:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:41:54.464 15:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:41:54.464 15:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:41:54.464 15:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:41:54.464 15:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:41:54.464 15:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:41:54.464 15:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:41:54.464 15:46:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:41:56.369 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:41:56.369 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:41:56.369 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:41:56.369 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:41:56.369 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:41:56.369 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:41:56.369 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:41:56.369 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:41:56.369 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:41:56.369 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:41:56.369 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:41:56.369 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:41:56.369 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:41:56.369 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:41:56.369 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:41:56.369 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:42:01.642 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:42:01.642 15:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Krt /tmp/spdk.key-null.eTG /tmp/spdk.key-sha256.UY5 /tmp/spdk.key-sha384.WTg /tmp/spdk.key-sha512.pAg /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:42:01.642 15:46:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:42:04.936 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:42:04.936 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:42:04.936 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:42:04.936 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:42:04.936 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:42:04.936 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:42:04.936 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:42:04.936 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:42:04.936 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:42:04.936 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:42:04.936 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:42:04.936 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 
00:42:04.936 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:42:04.936 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:42:04.936 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:42:04.936 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:42:04.936 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:42:04.936 00:42:04.936 real 1m16.525s 00:42:04.936 user 0m51.992s 00:42:04.936 sys 0m16.856s 00:42:04.936 15:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:04.936 15:46:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:04.936 ************************************ 00:42:04.936 END TEST nvmf_auth_host 00:42:04.936 ************************************ 00:42:04.936 15:46:32 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:42:04.936 15:46:32 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:42:04.936 15:46:32 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:42:04.936 15:46:32 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:42:04.937 ************************************ 00:42:04.937 START TEST nvmf_bdevperf 00:42:04.937 ************************************ 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:42:04.937 * Looking for test storage... 
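[annotation] Just before the timing summary above, the auth test's cleanup walked the kernel nvmet configfs tree bottom-up (clean_kernel_target) and then unloaded nvmet_rdma/nvmet. A compressed bash sketch of that ordering, reusing the subsystem, host and port names from the trace; the file the bare `echo 0` writes to (the namespace enable attribute) is inferred from context rather than shown in the log:

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

  # Drop the host from the allow list and delete the host entry.
  rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

  # Disable the namespace, detach the subsystem from the port, then remove
  # the configfs nodes child-before-parent (rmdir fails in any other order).
  echo 0 > "$subsys/namespaces/1/enable"      # inferred target of the traced 'echo 0'
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir "$subsys/namespaces/1"
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir "$subsys"

  modprobe -r nvmet_rdma nvmet                # only once no holders remain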
00:42:04.937 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:04.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:04.937 --rc genhtml_branch_coverage=1 00:42:04.937 --rc genhtml_function_coverage=1 00:42:04.937 --rc genhtml_legend=1 00:42:04.937 --rc geninfo_all_blocks=1 00:42:04.937 --rc geninfo_unexecuted_blocks=1 00:42:04.937 00:42:04.937 ' 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:04.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:04.937 --rc genhtml_branch_coverage=1 00:42:04.937 --rc genhtml_function_coverage=1 00:42:04.937 --rc genhtml_legend=1 00:42:04.937 --rc geninfo_all_blocks=1 00:42:04.937 --rc geninfo_unexecuted_blocks=1 00:42:04.937 00:42:04.937 ' 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:04.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:04.937 --rc genhtml_branch_coverage=1 00:42:04.937 --rc genhtml_function_coverage=1 00:42:04.937 --rc genhtml_legend=1 00:42:04.937 --rc geninfo_all_blocks=1 00:42:04.937 --rc geninfo_unexecuted_blocks=1 00:42:04.937 00:42:04.937 ' 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:04.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:04.937 --rc genhtml_branch_coverage=1 00:42:04.937 --rc genhtml_function_coverage=1 00:42:04.937 --rc genhtml_legend=1 00:42:04.937 --rc geninfo_all_blocks=1 00:42:04.937 --rc geninfo_unexecuted_blocks=1 00:42:04.937 00:42:04.937 ' 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:04.937 15:46:32 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:04.937 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:04.937 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:04.938 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:04.938 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:04.938 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:04.938 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:04.938 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:04.938 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:42:04.938 15:46:32 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:42:04.938 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:04.938 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:04.938 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:04.938 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:04.938 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:04.938 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:04.938 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:04.938 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:04.938 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:04.938 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:42:04.938 15:46:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:13.060 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:13.060 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:42:13.060 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:13.060 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:13.060 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:13.060 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:13.060 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:13.060 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:42:13.060 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:13.060 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:42:13.060 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:42:13.060 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:42:13.060 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:42:13.060 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:42:13.060 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:42:13.060 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:13.060 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:13.060 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:13.060 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:13.060 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:13.061 15:46:39 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:42:13.061 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:42:13.061 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:42:13.061 Found net devices under 0000:18:00.0: mlx_0_0 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:42:13.061 Found net devices under 0000:18:00.1: mlx_0_1 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # rdma_device_init 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:42:13.061 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:42:13.061 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:42:13.061 altname enp24s0f0np0 00:42:13.061 altname ens785f0np0 00:42:13.061 inet 192.168.100.8/24 scope global mlx_0_0 00:42:13.061 valid_lft forever preferred_lft forever 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:42:13.061 15:46:39 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:42:13.061 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:42:13.061 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:42:13.061 altname enp24s0f1np1 00:42:13.061 altname ens785f1np1 00:42:13.061 inet 192.168.100.9/24 scope global mlx_0_1 00:42:13.061 valid_lft forever preferred_lft forever 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:42:13.061 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:42:13.062 15:46:39 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:42:13.062 192.168.100.9' 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:42:13.062 192.168.100.9' 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # head -n 1 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:42:13.062 192.168.100.9' 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # tail -n +2 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # head -n 1 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3328817 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3328817 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 3328817 ']' 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:13.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:13.062 15:46:39 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:13.062 [2024-11-06 15:46:39.551981] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:42:13.062 [2024-11-06 15:46:39.552092] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:13.062 [2024-11-06 15:46:39.702885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:13.062 [2024-11-06 15:46:39.809307] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:13.062 [2024-11-06 15:46:39.809359] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:13.062 [2024-11-06 15:46:39.809372] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:13.062 [2024-11-06 15:46:39.809384] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:13.062 [2024-11-06 15:46:39.809393] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
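[annotation] The target for the bdevperf host test has just been launched with core mask 0xE and the harness is waiting for its RPC socket; the rpc_cmd calls traced next create the RDMA transport, a Malloc bdev, the subsystem, its namespace and the listener. A minimal bash equivalent, assuming scripts/rpc.py, the default /var/tmp/spdk.sock and a working directory at the SPDK tree root; the polling loop is only a stand-in for the harness's waitforlisten helper:

  # Launch the target with the same flags the log shows and wait for its RPC socket.
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # stand-in for waitforlisten

  # Provision the objects the following trace creates over rpc_cmd.
  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420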
00:42:13.062 [2024-11-06 15:46:39.811578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:13.062 [2024-11-06 15:46:39.811637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:13.062 [2024-11-06 15:46:39.811625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:13.062 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:13.062 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:42:13.062 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:13.062 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:13.062 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:13.062 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:13.062 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:42:13.062 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:13.062 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:13.062 [2024-11-06 15:46:40.441160] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7fb16311d940) succeed. 00:42:13.062 [2024-11-06 15:46:40.450608] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7fb1627bd940) succeed. 00:42:13.062 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:13.062 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:13.062 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:13.062 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:13.322 Malloc0 00:42:13.322 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:13.322 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:13.322 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:13.322 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:13.322 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:13.322 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:13.322 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:13.322 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:13.322 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:13.322 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:42:13.322 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:13.322 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:42:13.322 [2024-11-06 15:46:40.775646] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:42:13.322 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:13.322 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:42:13.322 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:42:13.322 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:42:13.322 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:42:13.322 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:13.322 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:13.322 { 00:42:13.322 "params": { 00:42:13.322 "name": "Nvme$subsystem", 00:42:13.322 "trtype": "$TEST_TRANSPORT", 00:42:13.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:13.322 "adrfam": "ipv4", 00:42:13.322 "trsvcid": "$NVMF_PORT", 00:42:13.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:13.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:13.322 "hdgst": ${hdgst:-false}, 00:42:13.322 "ddgst": ${ddgst:-false} 00:42:13.322 }, 00:42:13.322 "method": "bdev_nvme_attach_controller" 00:42:13.322 } 00:42:13.322 EOF 00:42:13.322 )") 00:42:13.322 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:42:13.322 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:42:13.322 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:42:13.322 15:46:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:13.322 "params": { 00:42:13.322 "name": "Nvme1", 00:42:13.322 "trtype": "rdma", 00:42:13.322 "traddr": "192.168.100.8", 00:42:13.322 "adrfam": "ipv4", 00:42:13.322 "trsvcid": "4420", 00:42:13.322 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:13.322 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:13.322 "hdgst": false, 00:42:13.322 "ddgst": false 00:42:13.322 }, 00:42:13.322 "method": "bdev_nvme_attach_controller" 00:42:13.322 }' 00:42:13.322 [2024-11-06 15:46:40.869413] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:42:13.322 [2024-11-06 15:46:40.869531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3329010 ] 00:42:13.581 [2024-11-06 15:46:41.016446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:13.581 [2024-11-06 15:46:41.128629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:14.150 Running I/O for 1 seconds... 
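[annotation] The one-second bdevperf run above is driven entirely from a JSON config generated on the fly: gen_nvmf_target_json (from test/nvmf/common.sh, already sourced by this script) prints the bdev_nvme_attach_controller stanza for Nvme1 that appears in the trace, and bdevperf reads it through process substitution. A rough sketch of that invocation, assuming the SPDK tree root as the working directory; the flags are the ones this run used:

  # Feed the generated target config straight into bdevperf via process substitution:
  # queue depth 128, 4096-byte I/O, verify workload, 1 second runtime.
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w verify -t 1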
00:42:15.090 15011.00 IOPS, 58.64 MiB/s 00:42:15.090 Latency(us) 00:42:15.090 [2024-11-06T14:46:42.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:15.090 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:42:15.090 Verification LBA range: start 0x0 length 0x4000 00:42:15.090 Nvme1n1 : 1.01 15048.39 58.78 0.00 0.00 8457.67 3248.31 19033.93 00:42:15.090 [2024-11-06T14:46:42.725Z] =================================================================================================================== 00:42:15.090 [2024-11-06T14:46:42.725Z] Total : 15048.39 58.78 0.00 0.00 8457.67 3248.31 19033.93 00:42:16.028 15:46:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3329293 00:42:16.028 15:46:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:42:16.028 15:46:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:42:16.028 15:46:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:42:16.028 15:46:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:42:16.028 15:46:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:42:16.029 15:46:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:16.029 15:46:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:16.029 { 00:42:16.029 "params": { 00:42:16.029 "name": "Nvme$subsystem", 00:42:16.029 "trtype": "$TEST_TRANSPORT", 00:42:16.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:16.029 "adrfam": "ipv4", 00:42:16.029 "trsvcid": "$NVMF_PORT", 00:42:16.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:16.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:16.029 "hdgst": ${hdgst:-false}, 00:42:16.029 "ddgst": ${ddgst:-false} 00:42:16.029 }, 00:42:16.029 "method": "bdev_nvme_attach_controller" 00:42:16.029 } 00:42:16.029 EOF 00:42:16.029 )") 00:42:16.029 15:46:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:42:16.029 15:46:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:42:16.029 15:46:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:42:16.029 15:46:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:16.029 "params": { 00:42:16.029 "name": "Nvme1", 00:42:16.029 "trtype": "rdma", 00:42:16.029 "traddr": "192.168.100.8", 00:42:16.029 "adrfam": "ipv4", 00:42:16.029 "trsvcid": "4420", 00:42:16.029 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:16.029 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:16.029 "hdgst": false, 00:42:16.029 "ddgst": false 00:42:16.029 }, 00:42:16.029 "method": "bdev_nvme_attach_controller" 00:42:16.029 }' 00:42:16.029 [2024-11-06 15:46:43.557286] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
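gen_nvmf_target_json, expanded inline above, only emits the bdev_nvme_attach_controller parameters that bdevperf then reads through a process-substitution descriptor (/dev/fd/62 for the first run, /dev/fd/63 for this one). As a rough standalone equivalent one could write the same attach call into a file and pass it directly; the surrounding "subsystems"/"bdev"/"config" wrapper below is the usual SPDK JSON config shape and is an assumption here, not copied from this run:

    # hypothetical standalone config reproducing the parameters printed above
    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "rdma",
                "traddr": "192.168.100.8",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 15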
00:42:16.029 [2024-11-06 15:46:43.557386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3329293 ] 00:42:16.288 [2024-11-06 15:46:43.711174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:16.288 [2024-11-06 15:46:43.821993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:16.857 Running I/O for 15 seconds... 00:42:18.731 15042.00 IOPS, 58.76 MiB/s [2024-11-06T14:46:46.626Z] 15168.00 IOPS, 59.25 MiB/s [2024-11-06T14:46:46.626Z] 15:46:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3328817 00:42:18.991 15:46:46 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:42:19.931 11404.67 IOPS, 44.55 MiB/s [2024-11-06T14:46:47.566Z] [2024-11-06 15:46:47.528890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b1000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.528960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043af000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ad000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ab000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a9000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a7000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a5000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a3000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a1000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439f000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439d000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439b000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004399000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004397000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004395000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004393000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:11696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004391000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438f000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438d000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438b000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004389000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004387000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004385000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004383000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004381000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437f000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529652] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437d000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437b000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004379000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004377000 len:0x1000 key:0x181b00 00:42:19.931 [2024-11-06 15:46:47.529752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.931 [2024-11-06 15:46:47.529766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004375000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.529777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.529790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004373000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.529811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.529824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004371000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.529835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.529849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436f000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.529861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.529876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436d000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.529889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:42:19.932 [2024-11-06 15:46:47.529902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436b000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.529914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.529928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004369000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.529940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.529954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004367000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.529965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.529979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004365000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.529992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004363000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004361000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435f000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435d000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435b000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 
lba:11920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004359000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004357000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004355000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004353000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004351000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434f000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434d000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434b000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004349000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004347000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530375] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004345000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004343000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004341000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433f000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433d000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433b000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004339000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004337000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004335000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:42:19.932 [2024-11-06 15:46:47.530611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004333000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004331000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432f000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432d000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.932 [2024-11-06 15:46:47.530712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432b000 len:0x1000 key:0x181b00 00:42:19.932 [2024-11-06 15:46:47.530723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.530736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004329000 len:0x1000 key:0x181b00 00:42:19.933 [2024-11-06 15:46:47.530747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.530760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004327000 len:0x1000 key:0x181b00 00:42:19.933 [2024-11-06 15:46:47.530771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.530785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004325000 len:0x1000 key:0x181b00 00:42:19.933 [2024-11-06 15:46:47.530796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.530810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004323000 len:0x1000 key:0x181b00 00:42:19.933 [2024-11-06 15:46:47.530821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.530834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12144 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004321000 len:0x1000 key:0x181b00 00:42:19.933 [2024-11-06 15:46:47.530845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.530861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431f000 len:0x1000 key:0x181b00 00:42:19.933 [2024-11-06 15:46:47.530873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.530886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431d000 len:0x1000 key:0x181b00 00:42:19.933 [2024-11-06 15:46:47.530898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.530910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431b000 len:0x1000 key:0x181b00 00:42:19.933 [2024-11-06 15:46:47.530923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.530936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004319000 len:0x1000 key:0x181b00 00:42:19.933 [2024-11-06 15:46:47.530947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.530960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004317000 len:0x1000 key:0x181b00 00:42:19.933 [2024-11-06 15:46:47.530971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.530985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004315000 len:0x1000 key:0x181b00 00:42:19.933 [2024-11-06 15:46:47.530997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004313000 len:0x1000 key:0x181b00 00:42:19.933 [2024-11-06 15:46:47.531021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004311000 len:0x1000 key:0x181b00 00:42:19.933 [2024-11-06 15:46:47.531047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430f000 len:0x1000 key:0x181b00 00:42:19.933 [2024-11-06 15:46:47.531071] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430d000 len:0x1000 key:0x181b00 00:42:19.933 [2024-11-06 15:46:47.531096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430b000 len:0x1000 key:0x181b00 00:42:19.933 [2024-11-06 15:46:47.531120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004309000 len:0x1000 key:0x181b00 00:42:19.933 [2024-11-06 15:46:47.531156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004307000 len:0x1000 key:0x181b00 00:42:19.933 [2024-11-06 15:46:47.531182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004305000 len:0x1000 key:0x181b00 00:42:19.933 [2024-11-06 15:46:47.531207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004303000 len:0x1000 key:0x181b00 00:42:19.933 [2024-11-06 15:46:47.531232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004301000 len:0x1000 key:0x181b00 00:42:19.933 [2024-11-06 15:46:47.531256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000042ff000 len:0x1000 key:0x181b00 00:42:19.933 [2024-11-06 15:46:47.531282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.933 [2024-11-06 15:46:47.531308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531322] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.933 [2024-11-06 15:46:47.531333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.933 [2024-11-06 15:46:47.531359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.933 [2024-11-06 15:46:47.531383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.933 [2024-11-06 15:46:47.531409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.933 [2024-11-06 15:46:47.531439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.933 [2024-11-06 15:46:47.531466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.933 [2024-11-06 15:46:47.531491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.933 [2024-11-06 15:46:47.531516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.933 [2024-11-06 15:46:47.531541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.933 [2024-11-06 15:46:47.531565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531579] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.933 [2024-11-06 15:46:47.531590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.933 [2024-11-06 15:46:47.531614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.933 [2024-11-06 15:46:47.531638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.933 [2024-11-06 15:46:47.531662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.933 [2024-11-06 15:46:47.531690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.933 [2024-11-06 15:46:47.531702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.934 [2024-11-06 15:46:47.531714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.934 [2024-11-06 15:46:47.531728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.934 [2024-11-06 15:46:47.531739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.934 [2024-11-06 15:46:47.531752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.934 [2024-11-06 15:46:47.531764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.934 [2024-11-06 15:46:47.531777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.934 [2024-11-06 15:46:47.531790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.934 [2024-11-06 15:46:47.531803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.934 [2024-11-06 15:46:47.531815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.934 [2024-11-06 15:46:47.531829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12456 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.934 [2024-11-06 15:46:47.531840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.934 [2024-11-06 15:46:47.531853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.934 [2024-11-06 15:46:47.531864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.934 [2024-11-06 15:46:47.531877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.934 [2024-11-06 15:46:47.531889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.934 [2024-11-06 15:46:47.531901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.934 [2024-11-06 15:46:47.531913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.934 [2024-11-06 15:46:47.531925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.934 [2024-11-06 15:46:47.531936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.934 [2024-11-06 15:46:47.531949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.934 [2024-11-06 15:46:47.531961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.934 [2024-11-06 15:46:47.531973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.934 [2024-11-06 15:46:47.531984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.934 [2024-11-06 15:46:47.531997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.934 [2024-11-06 15:46:47.532009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.934 [2024-11-06 15:46:47.532022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.934 [2024-11-06 15:46:47.532033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.934 [2024-11-06 15:46:47.532045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.934 [2024-11-06 15:46:47.532056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.934 [2024-11-06 15:46:47.532070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.934 
[2024-11-06 15:46:47.532083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.934 [2024-11-06 15:46:47.532099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.934 [2024-11-06 15:46:47.532110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.934 [2024-11-06 15:46:47.532128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.934 [2024-11-06 15:46:47.532141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.934 [2024-11-06 15:46:47.532155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.934 [2024-11-06 15:46:47.532166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.934 [2024-11-06 15:46:47.532180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.934 [2024-11-06 15:46:47.532191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.934 [2024-11-06 15:46:47.532205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:19.934 [2024-11-06 15:46:47.532217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.934 [2024-11-06 15:46:47.534388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:42:19.934 [2024-11-06 15:46:47.534414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:42:19.934 [2024-11-06 15:46:47.534429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12584 len:8 PRP1 0x0 PRP2 0x0 00:42:19.934 [2024-11-06 15:46:47.534444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:19.934 [2024-11-06 15:46:47.537596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:42:20.194 [2024-11-06 15:46:47.564531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:42:20.194 [2024-11-06 15:46:47.570679] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:42:20.194 [2024-11-06 15:46:47.570757] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:42:20.194 [2024-11-06 15:46:47.570797] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:42:21.020 8553.50 IOPS, 33.41 MiB/s [2024-11-06T14:46:48.655Z] [2024-11-06 15:46:48.575257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) 
on qpair id 0 00:42:21.020 [2024-11-06 15:46:48.575345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:42:21.020 [2024-11-06 15:46:48.576008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:42:21.020 [2024-11-06 15:46:48.576057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:42:21.020 [2024-11-06 15:46:48.576099] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:42:21.020 [2024-11-06 15:46:48.576163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:42:21.020 [2024-11-06 15:46:48.579493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:42:21.020 [2024-11-06 15:46:48.583958] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:42:21.020 [2024-11-06 15:46:48.584046] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:42:21.020 [2024-11-06 15:46:48.584085] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:42:21.958 6842.80 IOPS, 26.73 MiB/s [2024-11-06T14:46:49.593Z] /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3328817 Killed "${NVMF_APP[@]}" "$@" 00:42:21.958 15:46:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:42:21.958 15:46:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:42:21.958 15:46:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:21.958 15:46:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:21.958 15:46:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:21.958 15:46:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3330112 00:42:21.958 15:46:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3330112 00:42:21.958 15:46:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:42:21.958 15:46:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 3330112 ']' 00:42:21.958 15:46:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:21.958 15:46:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:21.958 15:46:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:21.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:21.958 15:46:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:21.958 15:46:49 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:21.958 [2024-11-06 15:46:49.583991] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:42:21.958 [2024-11-06 15:46:49.584098] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:21.958 [2024-11-06 15:46:49.588397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:42:21.958 [2024-11-06 15:46:49.588438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:42:21.958 [2024-11-06 15:46:49.588650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:42:21.958 [2024-11-06 15:46:49.588668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:42:21.958 [2024-11-06 15:46:49.588684] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:42:21.958 [2024-11-06 15:46:49.588704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:42:22.217 [2024-11-06 15:46:49.595765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:42:22.218 [2024-11-06 15:46:49.598735] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:42:22.218 [2024-11-06 15:46:49.598765] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:42:22.218 [2024-11-06 15:46:49.598777] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:42:22.218 [2024-11-06 15:46:49.744115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:22.475 [2024-11-06 15:46:49.855292] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:22.475 [2024-11-06 15:46:49.855345] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:22.475 [2024-11-06 15:46:49.855358] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:22.475 [2024-11-06 15:46:49.855372] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:22.475 [2024-11-06 15:46:49.855382] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
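Compared to the first target, the replacement started by tgt_init runs with -i 0 (shared-memory instance id 0, which is where the nvmf_trace.0 name above comes from), -e 0xFFFF (the tracepoint group mask behind the app_setup_trace notices), and -m 0xE (cores 1-3, matching the three reactor threads that come up next). A short sketch of grabbing the trace snapshot those notices point at, assuming the spdk_trace binary sits in the usual build/bin location of this workspace:

    # decode the live trace buffer of instance 0 (the command is quoted verbatim in the notices above)
    ./build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
    # or keep the raw shared-memory buffer for offline analysis, as the last notice suggests
    cp /dev/shm/nvmf_trace.0 /tmp/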
00:42:22.475 [2024-11-06 15:46:49.857503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:22.475 [2024-11-06 15:46:49.857575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:22.475 [2024-11-06 15:46:49.857601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:22.993 5702.33 IOPS, 22.27 MiB/s [2024-11-06T14:46:50.628Z] 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:22.993 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:42:22.993 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:22.993 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:22.993 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:22.993 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:22.993 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:42:22.993 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:22.993 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:22.993 [2024-11-06 15:46:50.466085] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7feb719a4940) succeed. 00:42:22.993 [2024-11-06 15:46:50.475659] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7feb7195e940) succeed. 00:42:22.993 [2024-11-06 15:46:50.602812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:42:22.993 [2024-11-06 15:46:50.602864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:42:22.993 [2024-11-06 15:46:50.603074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:42:22.993 [2024-11-06 15:46:50.603092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:42:22.993 [2024-11-06 15:46:50.603107] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:42:22.993 [2024-11-06 15:46:50.603134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:42:22.993 [2024-11-06 15:46:50.609255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:42:22.993 [2024-11-06 15:46:50.612406] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:42:22.993 [2024-11-06 15:46:50.612434] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:42:22.993 [2024-11-06 15:46:50.612447] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:42:23.252 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:23.252 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:23.252 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:23.252 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:23.252 Malloc0 00:42:23.252 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:23.252 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:23.252 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:23.252 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:23.252 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:23.252 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:23.252 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:23.252 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:23.252 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:23.252 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:42:23.252 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:23.252 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:23.252 [2024-11-06 15:46:50.785177] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:42:23.252 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:23.252 15:46:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3329293 00:42:24.080 4887.71 IOPS, 19.09 MiB/s [2024-11-06T14:46:51.715Z] [2024-11-06 15:46:51.616622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:42:24.080 [2024-11-06 15:46:51.616673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:42:24.080 [2024-11-06 15:46:51.616876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:42:24.080 [2024-11-06 15:46:51.616893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:42:24.080 [2024-11-06 15:46:51.616910] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:42:24.080 [2024-11-06 15:46:51.616931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:42:24.080 [2024-11-06 15:46:51.626614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:42:24.080 [2024-11-06 15:46:51.662302] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:42:26.027 5362.75 IOPS, 20.95 MiB/s [2024-11-06T14:46:54.599Z] 6466.00 IOPS, 25.26 MiB/s [2024-11-06T14:46:55.537Z] 7346.50 IOPS, 28.70 MiB/s [2024-11-06T14:46:56.474Z] 8068.91 IOPS, 31.52 MiB/s [2024-11-06T14:46:57.411Z] 8670.92 IOPS, 33.87 MiB/s [2024-11-06T14:46:58.348Z] 9178.38 IOPS, 35.85 MiB/s [2024-11-06T14:46:59.286Z] 9615.29 IOPS, 37.56 MiB/s [2024-11-06T14:46:59.286Z] 9992.40 IOPS, 39.03 MiB/s 00:42:31.651 Latency(us) 00:42:31.651 [2024-11-06T14:46:59.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:31.651 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:42:31.651 Verification LBA range: start 0x0 length 0x4000 00:42:31.651 Nvme1n1 : 15.01 9992.48 39.03 12085.66 0.00 5776.16 591.25 1064988.49 00:42:31.651 [2024-11-06T14:46:59.286Z] =================================================================================================================== 00:42:31.651 [2024-11-06T14:46:59.286Z] Total : 9992.48 39.03 12085.66 0.00 5776.16 591.25 1064988.49 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:42:33.028 rmmod nvme_rdma 00:42:33.028 rmmod nvme_fabrics 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3330112 ']' 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3330112 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 3330112 ']' 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 3330112 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3330112 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3330112' 00:42:33.028 killing process with pid 3330112 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 3330112 00:42:33.028 15:47:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 3330112 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:42:34.935 00:42:34.935 real 0m29.843s 00:42:34.935 user 1m16.631s 00:42:34.935 sys 0m7.265s 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:34.935 ************************************ 00:42:34.935 END TEST nvmf_bdevperf 00:42:34.935 ************************************ 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:42:34.935 ************************************ 00:42:34.935 START TEST nvmf_target_disconnect 00:42:34.935 ************************************ 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:42:34.935 * Looking for test storage... 
00:42:34.935 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:42:34.935 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:34.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:34.936 --rc genhtml_branch_coverage=1 00:42:34.936 --rc genhtml_function_coverage=1 00:42:34.936 --rc genhtml_legend=1 00:42:34.936 --rc geninfo_all_blocks=1 00:42:34.936 --rc geninfo_unexecuted_blocks=1 00:42:34.936 00:42:34.936 ' 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:34.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:34.936 --rc genhtml_branch_coverage=1 00:42:34.936 --rc genhtml_function_coverage=1 00:42:34.936 --rc genhtml_legend=1 00:42:34.936 --rc geninfo_all_blocks=1 00:42:34.936 --rc geninfo_unexecuted_blocks=1 00:42:34.936 00:42:34.936 ' 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:34.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:34.936 --rc genhtml_branch_coverage=1 00:42:34.936 --rc genhtml_function_coverage=1 00:42:34.936 --rc genhtml_legend=1 00:42:34.936 --rc geninfo_all_blocks=1 00:42:34.936 --rc geninfo_unexecuted_blocks=1 00:42:34.936 00:42:34.936 ' 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:34.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:34.936 --rc genhtml_branch_coverage=1 00:42:34.936 --rc genhtml_function_coverage=1 00:42:34.936 --rc genhtml_legend=1 00:42:34.936 --rc geninfo_all_blocks=1 00:42:34.936 --rc geninfo_unexecuted_blocks=1 00:42:34.936 00:42:34.936 ' 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@7 -- # uname -s 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:34.936 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:34.936 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:34.937 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:34.937 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:34.937 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:34.937 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:34.937 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:42:34.937 15:47:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:42:41.508 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:41.508 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:42:41.508 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:41.508 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:41.508 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:41.508 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:41.508 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:41.508 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:42:41.508 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:41.508 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:42:41.508 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:42:41.508 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:42:41.508 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:42:41.508 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:42:41.508 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:42:41.508 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:41.508 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:41.508 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:41.508 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:41.508 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:41.508 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:41.508 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:42:41.509 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:42:41.509 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:42:41.509 15:47:09 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:42:41.509 Found net devices under 0000:18:00.0: mlx_0_0 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:42:41.509 Found net devices under 0000:18:00.1: mlx_0_1 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 
00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:42:41.509 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:42:41.769 15:47:09 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:42:41.769 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:42:41.769 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:42:41.769 altname enp24s0f0np0 00:42:41.769 altname ens785f0np0 00:42:41.769 inet 192.168.100.8/24 scope global mlx_0_0 00:42:41.769 valid_lft forever preferred_lft forever 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:42:41.769 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:42:41.769 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:42:41.770 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:42:41.770 altname enp24s0f1np1 00:42:41.770 altname ens785f1np1 00:42:41.770 inet 192.168.100.9/24 scope global mlx_0_1 00:42:41.770 valid_lft forever preferred_lft forever 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 
00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:42:41.770 192.168.100.9' 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:42:41.770 192.168.100.9' 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:42:41.770 192.168.100.9' 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:42:41.770 ************************************ 00:42:41.770 START TEST nvmf_target_disconnect_tc1 00:42:41.770 ************************************ 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:42:41.770 15:47:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:42:42.029 [2024-11-06 15:47:09.592000] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:42:42.029 [2024-11-06 15:47:09.592073] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:42:42.029 [2024-11-06 15:47:09.592087] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d6ec0 00:42:42.964 [2024-11-06 15:47:10.596737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] CQ transport error -6 (No such device or address) on qpair id 0 00:42:42.964 [2024-11-06 15:47:10.596845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] in failed state. 
00:42:42.964 [2024-11-06 15:47:10.596899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] Ctrlr is in error state 00:42:42.964 [2024-11-06 15:47:10.597065] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:42.964 [2024-11-06 15:47:10.597115] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:42:42.964 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:42:42.964 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:42:43.224 Initializing NVMe Controllers 00:42:43.224 15:47:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:42:43.224 15:47:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:43.224 15:47:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:43.224 15:47:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:43.224 00:42:43.224 real 0m1.347s 00:42:43.224 user 0m0.955s 00:42:43.224 sys 0m0.376s 00:42:43.224 15:47:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:43.224 15:47:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:42:43.224 ************************************ 00:42:43.224 END TEST nvmf_target_disconnect_tc1 00:42:43.224 ************************************ 00:42:43.224 15:47:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:42:43.224 15:47:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:42:43.224 15:47:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:43.224 15:47:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:42:43.224 ************************************ 00:42:43.224 START TEST nvmf_target_disconnect_tc2 00:42:43.224 ************************************ 00:42:43.224 15:47:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:42:43.224 15:47:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:42:43.224 15:47:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:42:43.224 15:47:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:43.224 15:47:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:43.224 15:47:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:43.224 15:47:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3334706 00:42:43.224 15:47:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3334706 00:42:43.224 15:47:10 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:42:43.224 15:47:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3334706 ']' 00:42:43.224 15:47:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:43.224 15:47:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:43.224 15:47:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:43.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:43.224 15:47:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:43.224 15:47:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:43.484 [2024-11-06 15:47:10.901708] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:42:43.484 [2024-11-06 15:47:10.901814] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:43.484 [2024-11-06 15:47:11.050795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:43.743 [2024-11-06 15:47:11.159437] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:43.743 [2024-11-06 15:47:11.159491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:43.743 [2024-11-06 15:47:11.159503] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:43.743 [2024-11-06 15:47:11.159533] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:43.743 [2024-11-06 15:47:11.159543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:42:43.743 [2024-11-06 15:47:11.161783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:42:43.743 [2024-11-06 15:47:11.161873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:42:43.743 [2024-11-06 15:47:11.161937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:42:43.743 [2024-11-06 15:47:11.161966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:42:44.312 15:47:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:44.312 15:47:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:42:44.312 15:47:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:44.312 15:47:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:44.312 15:47:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:44.312 15:47:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:44.312 15:47:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:44.312 15:47:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:44.312 15:47:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:44.312 Malloc0 00:42:44.312 15:47:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:44.312 15:47:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:42:44.312 15:47:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:44.312 15:47:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:44.312 [2024-11-06 15:47:11.851547] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000298c0/0x7f8d3b753940) succeed. 00:42:44.312 [2024-11-06 15:47:11.861630] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029a40/0x7f8d3b70f940) succeed. 
00:42:44.571 15:47:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:44.571 15:47:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:44.571 15:47:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:44.571 15:47:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:44.571 15:47:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:44.571 15:47:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:44.571 15:47:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:44.571 15:47:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:44.571 15:47:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:44.571 15:47:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:42:44.571 15:47:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:44.571 15:47:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:44.571 [2024-11-06 15:47:12.155427] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:42:44.571 15:47:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:44.571 15:47:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:42:44.571 15:47:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:44.571 15:47:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:44.571 15:47:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:44.571 15:47:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3334903 00:42:44.571 15:47:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:42:44.571 15:47:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:42:47.108 15:47:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 
3334706 00:42:47.108 15:47:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:42:48.046 Read completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Read completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Write completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Read completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Write completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Write completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Write completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Read completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Write completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Write completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Read completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Read completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Write completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Write completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Write completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Write completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Read completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Read completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Read completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Write completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Read completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Write completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Write completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Read completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Write completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Write completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Write completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Read completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Write completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Write completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Read completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 Write completed with error (sct=0, sc=8) 00:42:48.046 starting I/O failed 00:42:48.046 [2024-11-06 15:47:15.476858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:42:48.614 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3334706 Killed "${NVMF_APP[@]}" "$@" 00:42:48.614 15:47:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:42:48.614 15:47:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # 
nvmfappstart -m 0xF0 00:42:48.614 15:47:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:48.614 15:47:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:48.614 15:47:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:48.614 15:47:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3335449 00:42:48.614 15:47:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3335449 00:42:48.614 15:47:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:42:48.614 15:47:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3335449 ']' 00:42:48.614 15:47:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:48.614 15:47:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:48.614 15:47:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:48.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:48.614 15:47:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:48.614 15:47:16 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:48.874 [2024-11-06 15:47:16.276900] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:42:48.874 [2024-11-06 15:47:16.277003] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:48.874 [2024-11-06 15:47:16.439457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:48.874 Read completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Read completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Read completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Write completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Read completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Write completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Write completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Write completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Read completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Write completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Read completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Read completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Write completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Write completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Write completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Write completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Write completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Read completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Write completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Write completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Write completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Write completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Write completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Write completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Write completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Read completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Read completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Write completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Write completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Write completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Read completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 Read completed with error (sct=0, sc=8) 00:42:48.874 starting I/O failed 00:42:48.874 [2024-11-06 15:47:16.483028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:42:49.134 [2024-11-06 15:47:16.553520] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:42:49.134 [2024-11-06 15:47:16.553570] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:49.134 [2024-11-06 15:47:16.553583] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:49.134 [2024-11-06 15:47:16.553597] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:49.134 [2024-11-06 15:47:16.553607] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:49.134 [2024-11-06 15:47:16.555873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:42:49.134 [2024-11-06 15:47:16.555960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:42:49.134 [2024-11-06 15:47:16.556022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:42:49.134 [2024-11-06 15:47:16.556048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:42:49.494 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:49.494 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:42:49.494 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:49.494 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:49.494 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:49.775 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:49.775 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:49.775 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:49.775 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:49.775 Malloc0 00:42:49.775 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:49.775 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:42:49.775 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:49.775 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:49.775 [2024-11-06 15:47:17.263089] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000298c0/0x7fd8ff71a940) succeed. 00:42:49.775 [2024-11-06 15:47:17.273116] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029a40/0x7fd8ff548940) succeed. 
00:42:50.056 Read completed with error (sct=0, sc=8) 00:42:50.056 starting I/O failed 00:42:50.056 Read completed with error (sct=0, sc=8) 00:42:50.056 starting I/O failed 00:42:50.056 Write completed with error (sct=0, sc=8) 00:42:50.056 starting I/O failed 00:42:50.056 Read completed with error (sct=0, sc=8) 00:42:50.056 starting I/O failed 00:42:50.056 Read completed with error (sct=0, sc=8) 00:42:50.056 starting I/O failed 00:42:50.056 Read completed with error (sct=0, sc=8) 00:42:50.056 starting I/O failed 00:42:50.056 Read completed with error (sct=0, sc=8) 00:42:50.056 starting I/O failed 00:42:50.056 Read completed with error (sct=0, sc=8) 00:42:50.056 starting I/O failed 00:42:50.056 Write completed with error (sct=0, sc=8) 00:42:50.056 starting I/O failed 00:42:50.056 Write completed with error (sct=0, sc=8) 00:42:50.056 starting I/O failed 00:42:50.056 Read completed with error (sct=0, sc=8) 00:42:50.056 starting I/O failed 00:42:50.056 Read completed with error (sct=0, sc=8) 00:42:50.056 starting I/O failed 00:42:50.056 Write completed with error (sct=0, sc=8) 00:42:50.056 starting I/O failed 00:42:50.056 Read completed with error (sct=0, sc=8) 00:42:50.056 starting I/O failed 00:42:50.056 Write completed with error (sct=0, sc=8) 00:42:50.056 starting I/O failed 00:42:50.056 Write completed with error (sct=0, sc=8) 00:42:50.056 starting I/O failed 00:42:50.056 Read completed with error (sct=0, sc=8) 00:42:50.056 starting I/O failed 00:42:50.056 Write completed with error (sct=0, sc=8) 00:42:50.056 starting I/O failed 00:42:50.056 Read completed with error (sct=0, sc=8) 00:42:50.056 starting I/O failed 00:42:50.056 Read completed with error (sct=0, sc=8) 00:42:50.056 starting I/O failed 00:42:50.056 Write completed with error (sct=0, sc=8) 00:42:50.056 starting I/O failed 00:42:50.057 Read completed with error (sct=0, sc=8) 00:42:50.057 starting I/O failed 00:42:50.057 Write completed with error (sct=0, sc=8) 00:42:50.057 starting I/O failed 00:42:50.057 Read completed with error (sct=0, sc=8) 00:42:50.057 starting I/O failed 00:42:50.057 Read completed with error (sct=0, sc=8) 00:42:50.057 starting I/O failed 00:42:50.057 Write completed with error (sct=0, sc=8) 00:42:50.057 starting I/O failed 00:42:50.057 Write completed with error (sct=0, sc=8) 00:42:50.057 starting I/O failed 00:42:50.057 Read completed with error (sct=0, sc=8) 00:42:50.057 starting I/O failed 00:42:50.057 Write completed with error (sct=0, sc=8) 00:42:50.057 starting I/O failed 00:42:50.057 Read completed with error (sct=0, sc=8) 00:42:50.057 starting I/O failed 00:42:50.057 Write completed with error (sct=0, sc=8) 00:42:50.057 starting I/O failed 00:42:50.057 Write completed with error (sct=0, sc=8) 00:42:50.057 starting I/O failed 00:42:50.057 [2024-11-06 15:47:17.488628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:42:50.057 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:50.057 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:50.057 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:50.057 15:47:17 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:50.057 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:50.057 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:50.057 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:50.057 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:50.057 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:50.057 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:42:50.057 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:50.057 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:50.057 [2024-11-06 15:47:17.573536] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:42:50.057 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:50.057 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:42:50.057 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:50.057 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:50.057 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:50.057 15:47:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3334903 00:42:50.994 Read completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Write completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Write completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Write completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Read completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Write completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Read completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Read completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Read completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Write completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Read completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Write completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Write completed with error (sct=0, sc=8) 00:42:50.994 
starting I/O failed 00:42:50.994 Write completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Read completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Write completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Write completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Write completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Read completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Write completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Write completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Write completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Read completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Read completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Write completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Read completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Write completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Read completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Read completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Read completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Read completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 Write completed with error (sct=0, sc=8) 00:42:50.994 starting I/O failed 00:42:50.994 [2024-11-06 15:47:18.494242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.994 [2024-11-06 15:47:18.500470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.994 [2024-11-06 15:47:18.500578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.994 [2024-11-06 15:47:18.500618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.994 [2024-11-06 15:47:18.500643] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.994 [2024-11-06 15:47:18.500668] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:50.994 [2024-11-06 15:47:18.510378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.994 qpair failed and we were unable to recover it. 
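The block above, and the repeated blocks that follow, are the symptom of the forced target restart: the reconnect example keeps retrying its I/O qpairs against the newly started target, which does not recognize controller ID 0x1 from the old association, so each fabrics CONNECT is rejected with a command-specific status (sct 1, sc 130) and the host then reports CQ transport error -6 (ENXIO, "No such device or address") before retrying. Pieced together from the host/target_disconnect.sh line numbers echoed in this log, the tc2 sequence that produces this looks roughly like the sketch below; it is a hypothetical reconstruction, not the verbatim script, and the $nvmfpid/$reconnectpid variable names are assumptions (the log shows the literal PIDs 3334706 and 3334903).

  # nvmf_target_disconnect_tc2, reconstructed from the xtrace lines in this log (sketch)
  disconnect_init 192.168.100.8        # line 37: start nvmf_tgt and the RDMA listener on 192.168.100.8:4420
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &   # line 40: start I/O
  reconnectpid=$!                      # line 42
  sleep 2                              # line 44
  kill -9 "$nvmfpid"                   # line 45: kill the target while I/O is in flight
  sleep 2                              # line 47
  disconnect_init 192.168.100.8        # line 48: bring the target back up
  wait "$reconnectpid"                 # line 50: the reconnect app must ride out the outage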
00:42:50.994 [2024-11-06 15:47:18.520239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.994 [2024-11-06 15:47:18.520336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.994 [2024-11-06 15:47:18.520367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.994 [2024-11-06 15:47:18.520393] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.994 [2024-11-06 15:47:18.520411] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:50.994 [2024-11-06 15:47:18.530242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.994 qpair failed and we were unable to recover it. 00:42:50.994 [2024-11-06 15:47:18.540067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.994 [2024-11-06 15:47:18.540166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.994 [2024-11-06 15:47:18.540205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.994 [2024-11-06 15:47:18.540226] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.994 [2024-11-06 15:47:18.540249] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:50.994 [2024-11-06 15:47:18.550030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.994 qpair failed and we were unable to recover it. 00:42:50.994 [2024-11-06 15:47:18.560161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.994 [2024-11-06 15:47:18.560243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.994 [2024-11-06 15:47:18.560277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.994 [2024-11-06 15:47:18.560302] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.994 [2024-11-06 15:47:18.560319] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:50.994 [2024-11-06 15:47:18.570296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.994 qpair failed and we were unable to recover it. 
00:42:50.994 [2024-11-06 15:47:18.580138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.994 [2024-11-06 15:47:18.580221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.994 [2024-11-06 15:47:18.580256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.994 [2024-11-06 15:47:18.580277] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.994 [2024-11-06 15:47:18.580301] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:50.994 [2024-11-06 15:47:18.590316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.994 qpair failed and we were unable to recover it. 00:42:50.994 [2024-11-06 15:47:18.600220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.994 [2024-11-06 15:47:18.600310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.994 [2024-11-06 15:47:18.600339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.994 [2024-11-06 15:47:18.600363] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.994 [2024-11-06 15:47:18.600381] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:50.994 [2024-11-06 15:47:18.610376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:50.994 qpair failed and we were unable to recover it. 00:42:50.994 [2024-11-06 15:47:18.620239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:50.995 [2024-11-06 15:47:18.620310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:50.995 [2024-11-06 15:47:18.620351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:50.995 [2024-11-06 15:47:18.620378] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:50.995 [2024-11-06 15:47:18.620399] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.255 [2024-11-06 15:47:18.630470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.255 qpair failed and we were unable to recover it. 
00:42:51.255 [2024-11-06 15:47:18.640281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.255 [2024-11-06 15:47:18.640361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.255 [2024-11-06 15:47:18.640392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.255 [2024-11-06 15:47:18.640416] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.255 [2024-11-06 15:47:18.640433] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.255 [2024-11-06 15:47:18.650687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.255 qpair failed and we were unable to recover it. 00:42:51.255 [2024-11-06 15:47:18.660466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.255 [2024-11-06 15:47:18.660538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.255 [2024-11-06 15:47:18.660573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.255 [2024-11-06 15:47:18.660593] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.255 [2024-11-06 15:47:18.660618] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.255 [2024-11-06 15:47:18.670879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.255 qpair failed and we were unable to recover it. 00:42:51.255 [2024-11-06 15:47:18.680524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.255 [2024-11-06 15:47:18.680600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.255 [2024-11-06 15:47:18.680634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.255 [2024-11-06 15:47:18.680659] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.255 [2024-11-06 15:47:18.680676] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.255 [2024-11-06 15:47:18.690812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.255 qpair failed and we were unable to recover it. 
00:42:51.255 [2024-11-06 15:47:18.700478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.255 [2024-11-06 15:47:18.700548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.255 [2024-11-06 15:47:18.700584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.255 [2024-11-06 15:47:18.700605] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.255 [2024-11-06 15:47:18.700625] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.255 [2024-11-06 15:47:18.710862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.255 qpair failed and we were unable to recover it. 00:42:51.255 [2024-11-06 15:47:18.720593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.255 [2024-11-06 15:47:18.720670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.255 [2024-11-06 15:47:18.720704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.255 [2024-11-06 15:47:18.720728] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.255 [2024-11-06 15:47:18.720745] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.255 [2024-11-06 15:47:18.730916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.255 qpair failed and we were unable to recover it. 00:42:51.255 [2024-11-06 15:47:18.740683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.255 [2024-11-06 15:47:18.740760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.255 [2024-11-06 15:47:18.740795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.255 [2024-11-06 15:47:18.740816] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.255 [2024-11-06 15:47:18.740837] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.255 [2024-11-06 15:47:18.750915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.255 qpair failed and we were unable to recover it. 
00:42:51.255 [2024-11-06 15:47:18.760739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.255 [2024-11-06 15:47:18.760822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.255 [2024-11-06 15:47:18.760856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.255 [2024-11-06 15:47:18.760884] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.255 [2024-11-06 15:47:18.760901] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.255 [2024-11-06 15:47:18.771074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.255 qpair failed and we were unable to recover it. 00:42:51.255 [2024-11-06 15:47:18.780691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.255 [2024-11-06 15:47:18.780771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.255 [2024-11-06 15:47:18.780807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.255 [2024-11-06 15:47:18.780827] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.255 [2024-11-06 15:47:18.780849] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.255 [2024-11-06 15:47:18.790874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.255 qpair failed and we were unable to recover it. 00:42:51.255 [2024-11-06 15:47:18.800751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.255 [2024-11-06 15:47:18.800824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.255 [2024-11-06 15:47:18.800856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.255 [2024-11-06 15:47:18.800885] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.255 [2024-11-06 15:47:18.800902] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.255 [2024-11-06 15:47:18.811021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.255 qpair failed and we were unable to recover it. 
00:42:51.255 [2024-11-06 15:47:18.820784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.255 [2024-11-06 15:47:18.820862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.255 [2024-11-06 15:47:18.820900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.255 [2024-11-06 15:47:18.820921] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.255 [2024-11-06 15:47:18.820941] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.255 [2024-11-06 15:47:18.831258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.255 qpair failed and we were unable to recover it. 00:42:51.255 [2024-11-06 15:47:18.840923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.255 [2024-11-06 15:47:18.841003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.255 [2024-11-06 15:47:18.841034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.255 [2024-11-06 15:47:18.841058] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.255 [2024-11-06 15:47:18.841075] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.255 [2024-11-06 15:47:18.851218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.255 qpair failed and we were unable to recover it. 00:42:51.255 [2024-11-06 15:47:18.860923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.255 [2024-11-06 15:47:18.860997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.255 [2024-11-06 15:47:18.861032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.255 [2024-11-06 15:47:18.861054] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.255 [2024-11-06 15:47:18.861074] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.255 [2024-11-06 15:47:18.871355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.255 qpair failed and we were unable to recover it. 
00:42:51.255 [2024-11-06 15:47:18.880973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.256 [2024-11-06 15:47:18.881060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.256 [2024-11-06 15:47:18.881094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.256 [2024-11-06 15:47:18.881119] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.256 [2024-11-06 15:47:18.881145] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.516 [2024-11-06 15:47:18.891284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.516 qpair failed and we were unable to recover it. 00:42:51.516 [2024-11-06 15:47:18.901192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.516 [2024-11-06 15:47:18.901274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.516 [2024-11-06 15:47:18.901308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.516 [2024-11-06 15:47:18.901330] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.516 [2024-11-06 15:47:18.901350] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.516 [2024-11-06 15:47:18.911268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.516 qpair failed and we were unable to recover it. 00:42:51.516 [2024-11-06 15:47:18.921238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.516 [2024-11-06 15:47:18.921314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.516 [2024-11-06 15:47:18.921346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.516 [2024-11-06 15:47:18.921370] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.516 [2024-11-06 15:47:18.921387] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.516 [2024-11-06 15:47:18.931501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.516 qpair failed and we were unable to recover it. 
00:42:51.516 [2024-11-06 15:47:18.941166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.516 [2024-11-06 15:47:18.941239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.516 [2024-11-06 15:47:18.941278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.516 [2024-11-06 15:47:18.941299] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.516 [2024-11-06 15:47:18.941320] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.516 [2024-11-06 15:47:18.951472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.516 qpair failed and we were unable to recover it. 00:42:51.516 [2024-11-06 15:47:18.961379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.516 [2024-11-06 15:47:18.961459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.516 [2024-11-06 15:47:18.961490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.516 [2024-11-06 15:47:18.961515] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.516 [2024-11-06 15:47:18.961537] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.516 [2024-11-06 15:47:18.971361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.516 qpair failed and we were unable to recover it. 00:42:51.516 [2024-11-06 15:47:18.981311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.516 [2024-11-06 15:47:18.981378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.516 [2024-11-06 15:47:18.981414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.516 [2024-11-06 15:47:18.981436] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.516 [2024-11-06 15:47:18.981460] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.516 [2024-11-06 15:47:18.991597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.516 qpair failed and we were unable to recover it. 
00:42:51.516 [2024-11-06 15:47:19.001391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.516 [2024-11-06 15:47:19.001468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.516 [2024-11-06 15:47:19.001499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.516 [2024-11-06 15:47:19.001522] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.516 [2024-11-06 15:47:19.001541] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.516 [2024-11-06 15:47:19.011655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.516 qpair failed and we were unable to recover it. 00:42:51.516 [2024-11-06 15:47:19.021432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.516 [2024-11-06 15:47:19.021506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.516 [2024-11-06 15:47:19.021543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.516 [2024-11-06 15:47:19.021565] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.516 [2024-11-06 15:47:19.021585] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.516 [2024-11-06 15:47:19.034713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.516 qpair failed and we were unable to recover it. 00:42:51.516 [2024-11-06 15:47:19.041502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.516 [2024-11-06 15:47:19.041585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.516 [2024-11-06 15:47:19.041616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.516 [2024-11-06 15:47:19.041641] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.516 [2024-11-06 15:47:19.041658] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.516 [2024-11-06 15:47:19.051613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.516 qpair failed and we were unable to recover it. 
00:42:51.516 [2024-11-06 15:47:19.061574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.516 [2024-11-06 15:47:19.061644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.516 [2024-11-06 15:47:19.061681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.516 [2024-11-06 15:47:19.061703] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.516 [2024-11-06 15:47:19.061727] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.516 [2024-11-06 15:47:19.071903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.516 qpair failed and we were unable to recover it. 00:42:51.516 [2024-11-06 15:47:19.081676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.516 [2024-11-06 15:47:19.081761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.516 [2024-11-06 15:47:19.081792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.516 [2024-11-06 15:47:19.081816] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.516 [2024-11-06 15:47:19.081834] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.516 [2024-11-06 15:47:19.091923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.516 qpair failed and we were unable to recover it. 00:42:51.517 [2024-11-06 15:47:19.101660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.517 [2024-11-06 15:47:19.101736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.517 [2024-11-06 15:47:19.101773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.517 [2024-11-06 15:47:19.101794] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.517 [2024-11-06 15:47:19.101815] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.517 [2024-11-06 15:47:19.112169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.517 qpair failed and we were unable to recover it. 
00:42:51.517 [2024-11-06 15:47:19.121726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.517 [2024-11-06 15:47:19.121802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.517 [2024-11-06 15:47:19.121834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.517 [2024-11-06 15:47:19.121864] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.517 [2024-11-06 15:47:19.121883] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.517 [2024-11-06 15:47:19.131969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.517 qpair failed and we were unable to recover it. 00:42:51.517 [2024-11-06 15:47:19.141860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.517 [2024-11-06 15:47:19.141942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.517 [2024-11-06 15:47:19.141978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.517 [2024-11-06 15:47:19.141999] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.517 [2024-11-06 15:47:19.142019] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.777 [2024-11-06 15:47:19.152042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.777 qpair failed and we were unable to recover it. 00:42:51.777 [2024-11-06 15:47:19.161841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.777 [2024-11-06 15:47:19.161917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.777 [2024-11-06 15:47:19.161949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.777 [2024-11-06 15:47:19.161973] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.777 [2024-11-06 15:47:19.161991] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.777 [2024-11-06 15:47:19.172240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.777 qpair failed and we were unable to recover it. 
00:42:51.777 [2024-11-06 15:47:19.181886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.777 [2024-11-06 15:47:19.181965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.777 [2024-11-06 15:47:19.182001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.777 [2024-11-06 15:47:19.182022] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.777 [2024-11-06 15:47:19.182046] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.777 [2024-11-06 15:47:19.192298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.777 qpair failed and we were unable to recover it. 00:42:51.777 [2024-11-06 15:47:19.202022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.777 [2024-11-06 15:47:19.202103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.777 [2024-11-06 15:47:19.202152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.777 [2024-11-06 15:47:19.202180] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.777 [2024-11-06 15:47:19.202197] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.777 [2024-11-06 15:47:19.212344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.777 qpair failed and we were unable to recover it. 00:42:51.777 [2024-11-06 15:47:19.222057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.777 [2024-11-06 15:47:19.222154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.777 [2024-11-06 15:47:19.222192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.777 [2024-11-06 15:47:19.222214] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.777 [2024-11-06 15:47:19.222234] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.777 [2024-11-06 15:47:19.232505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.777 qpair failed and we were unable to recover it. 
00:42:51.777 [2024-11-06 15:47:19.242135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.777 [2024-11-06 15:47:19.242215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.777 [2024-11-06 15:47:19.242247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.777 [2024-11-06 15:47:19.242272] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.777 [2024-11-06 15:47:19.242289] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.777 [2024-11-06 15:47:19.252444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.777 qpair failed and we were unable to recover it. 00:42:51.777 [2024-11-06 15:47:19.262198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.777 [2024-11-06 15:47:19.262280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.777 [2024-11-06 15:47:19.262318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.777 [2024-11-06 15:47:19.262340] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.777 [2024-11-06 15:47:19.262361] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.777 [2024-11-06 15:47:19.272273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.777 qpair failed and we were unable to recover it. 00:42:51.777 [2024-11-06 15:47:19.282317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.777 [2024-11-06 15:47:19.282395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.777 [2024-11-06 15:47:19.282427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.777 [2024-11-06 15:47:19.282452] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.777 [2024-11-06 15:47:19.282470] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.777 [2024-11-06 15:47:19.292634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.777 qpair failed and we were unable to recover it. 
00:42:51.777 [2024-11-06 15:47:19.302350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.777 [2024-11-06 15:47:19.302428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.777 [2024-11-06 15:47:19.302463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.778 [2024-11-06 15:47:19.302485] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.778 [2024-11-06 15:47:19.302515] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.778 [2024-11-06 15:47:19.312650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.778 qpair failed and we were unable to recover it. 00:42:51.778 [2024-11-06 15:47:19.322383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.778 [2024-11-06 15:47:19.322457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.778 [2024-11-06 15:47:19.322491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.778 [2024-11-06 15:47:19.322517] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.778 [2024-11-06 15:47:19.322535] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.778 [2024-11-06 15:47:19.332528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.778 qpair failed and we were unable to recover it. 00:42:51.778 [2024-11-06 15:47:19.342384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.778 [2024-11-06 15:47:19.342460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.778 [2024-11-06 15:47:19.342495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.778 [2024-11-06 15:47:19.342517] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.778 [2024-11-06 15:47:19.342538] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.778 [2024-11-06 15:47:19.352799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.778 qpair failed and we were unable to recover it. 
00:42:51.778 [2024-11-06 15:47:19.362519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.778 [2024-11-06 15:47:19.362604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.778 [2024-11-06 15:47:19.362635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.778 [2024-11-06 15:47:19.362660] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.778 [2024-11-06 15:47:19.362679] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.778 [2024-11-06 15:47:19.372702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.778 qpair failed and we were unable to recover it. 00:42:51.778 [2024-11-06 15:47:19.382466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.778 [2024-11-06 15:47:19.382543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.778 [2024-11-06 15:47:19.382579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.778 [2024-11-06 15:47:19.382601] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.778 [2024-11-06 15:47:19.382624] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:51.778 [2024-11-06 15:47:19.392564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:51.778 qpair failed and we were unable to recover it. 00:42:51.778 [2024-11-06 15:47:19.402471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:51.778 [2024-11-06 15:47:19.402553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:51.778 [2024-11-06 15:47:19.402583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:51.778 [2024-11-06 15:47:19.402609] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:51.778 [2024-11-06 15:47:19.402626] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.038 [2024-11-06 15:47:19.412894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.038 qpair failed and we were unable to recover it. 
00:42:52.038 [2024-11-06 15:47:19.422637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.038 [2024-11-06 15:47:19.422710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.038 [2024-11-06 15:47:19.422747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.038 [2024-11-06 15:47:19.422768] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.038 [2024-11-06 15:47:19.422789] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.038 [2024-11-06 15:47:19.433076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.038 qpair failed and we were unable to recover it. 00:42:52.038 [2024-11-06 15:47:19.442725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.038 [2024-11-06 15:47:19.442800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.038 [2024-11-06 15:47:19.442833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.038 [2024-11-06 15:47:19.442861] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.038 [2024-11-06 15:47:19.442879] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.038 [2024-11-06 15:47:19.452991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.038 qpair failed and we were unable to recover it. 00:42:52.038 [2024-11-06 15:47:19.462763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.038 [2024-11-06 15:47:19.462838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.038 [2024-11-06 15:47:19.462873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.038 [2024-11-06 15:47:19.462894] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.038 [2024-11-06 15:47:19.462914] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.038 [2024-11-06 15:47:19.473035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.038 qpair failed and we were unable to recover it. 
00:42:52.038 [2024-11-06 15:47:19.482825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.038 [2024-11-06 15:47:19.482920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.038 [2024-11-06 15:47:19.482949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.038 [2024-11-06 15:47:19.482974] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.038 [2024-11-06 15:47:19.482991] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.038 [2024-11-06 15:47:19.493171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.038 qpair failed and we were unable to recover it. 00:42:52.038 [2024-11-06 15:47:19.503345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.038 [2024-11-06 15:47:19.503432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.038 [2024-11-06 15:47:19.503468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.038 [2024-11-06 15:47:19.503488] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.038 [2024-11-06 15:47:19.503509] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.038 [2024-11-06 15:47:19.513217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.038 qpair failed and we were unable to recover it. 00:42:52.038 [2024-11-06 15:47:19.522896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.038 [2024-11-06 15:47:19.522980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.038 [2024-11-06 15:47:19.523012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.038 [2024-11-06 15:47:19.523038] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.038 [2024-11-06 15:47:19.523056] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.038 [2024-11-06 15:47:19.533387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.038 qpair failed and we were unable to recover it. 
00:42:52.038 [2024-11-06 15:47:19.543007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.038 [2024-11-06 15:47:19.543093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.038 [2024-11-06 15:47:19.543135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.038 [2024-11-06 15:47:19.543158] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.038 [2024-11-06 15:47:19.543178] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.038 [2024-11-06 15:47:19.553461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.038 qpair failed and we were unable to recover it. 00:42:52.038 [2024-11-06 15:47:19.563142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.038 [2024-11-06 15:47:19.563223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.038 [2024-11-06 15:47:19.563255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.038 [2024-11-06 15:47:19.563286] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.038 [2024-11-06 15:47:19.563303] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.038 [2024-11-06 15:47:19.573683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.038 qpair failed and we were unable to recover it. 00:42:52.038 [2024-11-06 15:47:19.583189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.039 [2024-11-06 15:47:19.583264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.039 [2024-11-06 15:47:19.583303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.039 [2024-11-06 15:47:19.583324] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.039 [2024-11-06 15:47:19.583348] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.039 [2024-11-06 15:47:19.593358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.039 qpair failed and we were unable to recover it. 
00:42:52.039 [2024-11-06 15:47:19.603174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.039 [2024-11-06 15:47:19.603249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.039 [2024-11-06 15:47:19.603282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.039 [2024-11-06 15:47:19.603307] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.039 [2024-11-06 15:47:19.603325] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.039 [2024-11-06 15:47:19.613566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.039 qpair failed and we were unable to recover it. 00:42:52.039 [2024-11-06 15:47:19.623325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.039 [2024-11-06 15:47:19.623403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.039 [2024-11-06 15:47:19.623439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.039 [2024-11-06 15:47:19.623461] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.039 [2024-11-06 15:47:19.623485] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.039 [2024-11-06 15:47:19.633660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.039 qpair failed and we were unable to recover it. 00:42:52.039 [2024-11-06 15:47:19.643352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.039 [2024-11-06 15:47:19.643424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.039 [2024-11-06 15:47:19.643457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.039 [2024-11-06 15:47:19.643481] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.039 [2024-11-06 15:47:19.643499] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.039 [2024-11-06 15:47:19.653861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.039 qpair failed and we were unable to recover it. 
00:42:52.039 [2024-11-06 15:47:19.663533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.039 [2024-11-06 15:47:19.663607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.039 [2024-11-06 15:47:19.663644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.039 [2024-11-06 15:47:19.663666] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.039 [2024-11-06 15:47:19.663687] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.299 [2024-11-06 15:47:19.673748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.299 qpair failed and we were unable to recover it. 00:42:52.299 [2024-11-06 15:47:19.683564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.299 [2024-11-06 15:47:19.683648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.299 [2024-11-06 15:47:19.683681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.299 [2024-11-06 15:47:19.683706] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.299 [2024-11-06 15:47:19.683724] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.299 [2024-11-06 15:47:19.693810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.299 qpair failed and we were unable to recover it. 00:42:52.299 [2024-11-06 15:47:19.703623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.299 [2024-11-06 15:47:19.703704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.299 [2024-11-06 15:47:19.703740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.299 [2024-11-06 15:47:19.703761] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.299 [2024-11-06 15:47:19.703782] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.299 [2024-11-06 15:47:19.713886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.299 qpair failed and we were unable to recover it. 
00:42:52.299 [2024-11-06 15:47:19.723574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.299 [2024-11-06 15:47:19.723660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.299 [2024-11-06 15:47:19.723689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.299 [2024-11-06 15:47:19.723714] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.299 [2024-11-06 15:47:19.723734] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.299 [2024-11-06 15:47:19.734032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.299 qpair failed and we were unable to recover it. 00:42:52.299 [2024-11-06 15:47:19.743668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.299 [2024-11-06 15:47:19.743739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.299 [2024-11-06 15:47:19.743776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.299 [2024-11-06 15:47:19.743798] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.299 [2024-11-06 15:47:19.743820] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.299 [2024-11-06 15:47:19.754085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.299 qpair failed and we were unable to recover it. 00:42:52.299 [2024-11-06 15:47:19.763694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.299 [2024-11-06 15:47:19.763773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.299 [2024-11-06 15:47:19.763807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.299 [2024-11-06 15:47:19.763835] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.299 [2024-11-06 15:47:19.763853] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.299 [2024-11-06 15:47:19.774001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.299 qpair failed and we were unable to recover it. 
00:42:52.299 [2024-11-06 15:47:19.783910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.299 [2024-11-06 15:47:19.783985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.299 [2024-11-06 15:47:19.784017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.299 [2024-11-06 15:47:19.784039] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.299 [2024-11-06 15:47:19.784055] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.299 [2024-11-06 15:47:19.794142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.300 qpair failed and we were unable to recover it. 00:42:52.300 [2024-11-06 15:47:19.803889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.300 [2024-11-06 15:47:19.803967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.300 [2024-11-06 15:47:19.803998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.300 [2024-11-06 15:47:19.804020] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.300 [2024-11-06 15:47:19.804037] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.300 [2024-11-06 15:47:19.814165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.300 qpair failed and we were unable to recover it. 00:42:52.300 [2024-11-06 15:47:19.823971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.300 [2024-11-06 15:47:19.824048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.300 [2024-11-06 15:47:19.824085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.300 [2024-11-06 15:47:19.824106] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.300 [2024-11-06 15:47:19.824130] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.300 [2024-11-06 15:47:19.834113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.300 qpair failed and we were unable to recover it. 
00:42:52.300 [2024-11-06 15:47:19.844011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.300 [2024-11-06 15:47:19.844084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.300 [2024-11-06 15:47:19.844116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.300 [2024-11-06 15:47:19.844145] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.300 [2024-11-06 15:47:19.844163] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.300 [2024-11-06 15:47:19.854165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.300 qpair failed and we were unable to recover it. 00:42:52.300 [2024-11-06 15:47:19.864195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.300 [2024-11-06 15:47:19.864266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.300 [2024-11-06 15:47:19.864301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.300 [2024-11-06 15:47:19.864323] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.300 [2024-11-06 15:47:19.864340] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.300 [2024-11-06 15:47:19.874382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.300 qpair failed and we were unable to recover it. 00:42:52.300 [2024-11-06 15:47:19.884196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.300 [2024-11-06 15:47:19.884263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.300 [2024-11-06 15:47:19.884295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.300 [2024-11-06 15:47:19.884316] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.300 [2024-11-06 15:47:19.884334] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.300 [2024-11-06 15:47:19.894291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.300 qpair failed and we were unable to recover it. 
00:42:52.300 [2024-11-06 15:47:19.904203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.300 [2024-11-06 15:47:19.904276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.300 [2024-11-06 15:47:19.904309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.300 [2024-11-06 15:47:19.904336] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.300 [2024-11-06 15:47:19.904353] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.300 [2024-11-06 15:47:19.914518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.300 qpair failed and we were unable to recover it. 00:42:52.300 [2024-11-06 15:47:19.924234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.300 [2024-11-06 15:47:19.924303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.300 [2024-11-06 15:47:19.924335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.300 [2024-11-06 15:47:19.924356] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.300 [2024-11-06 15:47:19.924372] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.300 [2024-11-06 15:47:19.934515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.300 qpair failed and we were unable to recover it. 00:42:52.560 [2024-11-06 15:47:19.944453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.560 [2024-11-06 15:47:19.944518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.560 [2024-11-06 15:47:19.944550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.560 [2024-11-06 15:47:19.944571] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.560 [2024-11-06 15:47:19.944588] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.560 [2024-11-06 15:47:19.954571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.560 qpair failed and we were unable to recover it. 
00:42:52.560 [2024-11-06 15:47:19.964424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.560 [2024-11-06 15:47:19.964492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.560 [2024-11-06 15:47:19.964525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.560 [2024-11-06 15:47:19.964548] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.560 [2024-11-06 15:47:19.964567] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.560 [2024-11-06 15:47:19.974697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.560 qpair failed and we were unable to recover it. 00:42:52.560 [2024-11-06 15:47:19.984554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.560 [2024-11-06 15:47:19.984628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.560 [2024-11-06 15:47:19.984659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.560 [2024-11-06 15:47:19.984682] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.560 [2024-11-06 15:47:19.984699] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.560 [2024-11-06 15:47:19.994374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.560 qpair failed and we were unable to recover it. 00:42:52.560 [2024-11-06 15:47:20.004515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.560 [2024-11-06 15:47:20.004602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.560 [2024-11-06 15:47:20.004633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.560 [2024-11-06 15:47:20.004654] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.560 [2024-11-06 15:47:20.004672] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.560 [2024-11-06 15:47:20.014436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.560 qpair failed and we were unable to recover it. 
00:42:52.560 [2024-11-06 15:47:20.024486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.560 [2024-11-06 15:47:20.024558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.560 [2024-11-06 15:47:20.024591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.560 [2024-11-06 15:47:20.024613] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.560 [2024-11-06 15:47:20.024631] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.560 [2024-11-06 15:47:20.034675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.560 qpair failed and we were unable to recover it. 00:42:52.560 [2024-11-06 15:47:20.044624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.560 [2024-11-06 15:47:20.044697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.560 [2024-11-06 15:47:20.044729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.560 [2024-11-06 15:47:20.044751] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.560 [2024-11-06 15:47:20.044768] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.560 [2024-11-06 15:47:20.054851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.560 qpair failed and we were unable to recover it. 00:42:52.560 [2024-11-06 15:47:20.064681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.560 [2024-11-06 15:47:20.064752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.560 [2024-11-06 15:47:20.064785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.560 [2024-11-06 15:47:20.064807] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.560 [2024-11-06 15:47:20.064824] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.560 [2024-11-06 15:47:20.074616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.560 qpair failed and we were unable to recover it. 
00:42:52.560 [2024-11-06 15:47:20.084718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.560 [2024-11-06 15:47:20.084792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.560 [2024-11-06 15:47:20.084825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.561 [2024-11-06 15:47:20.084847] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.561 [2024-11-06 15:47:20.084864] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.561 [2024-11-06 15:47:20.094801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.561 qpair failed and we were unable to recover it. 00:42:52.561 [2024-11-06 15:47:20.104805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.561 [2024-11-06 15:47:20.104871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.561 [2024-11-06 15:47:20.104903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.561 [2024-11-06 15:47:20.104925] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.561 [2024-11-06 15:47:20.104942] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.561 [2024-11-06 15:47:20.114979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.561 qpair failed and we were unable to recover it. 00:42:52.561 [2024-11-06 15:47:20.124751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.561 [2024-11-06 15:47:20.124818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.561 [2024-11-06 15:47:20.124851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.561 [2024-11-06 15:47:20.124873] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.561 [2024-11-06 15:47:20.124890] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.561 [2024-11-06 15:47:20.134924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.561 qpair failed and we were unable to recover it. 
00:42:52.561 [2024-11-06 15:47:20.144917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.561 [2024-11-06 15:47:20.144987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.561 [2024-11-06 15:47:20.145018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.561 [2024-11-06 15:47:20.145039] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.561 [2024-11-06 15:47:20.145056] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.561 [2024-11-06 15:47:20.154962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.561 qpair failed and we were unable to recover it. 00:42:52.561 [2024-11-06 15:47:20.164881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.561 [2024-11-06 15:47:20.164952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.561 [2024-11-06 15:47:20.164994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.561 [2024-11-06 15:47:20.165015] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.561 [2024-11-06 15:47:20.165032] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.561 [2024-11-06 15:47:20.175059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.561 qpair failed and we were unable to recover it. 00:42:52.561 [2024-11-06 15:47:20.184982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.561 [2024-11-06 15:47:20.185044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.561 [2024-11-06 15:47:20.185076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.561 [2024-11-06 15:47:20.185098] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.561 [2024-11-06 15:47:20.185115] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.561 [2024-11-06 15:47:20.195036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.561 qpair failed and we were unable to recover it. 
00:42:52.821 [2024-11-06 15:47:20.205039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.821 [2024-11-06 15:47:20.205109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.821 [2024-11-06 15:47:20.205148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.821 [2024-11-06 15:47:20.205169] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.821 [2024-11-06 15:47:20.205188] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.821 [2024-11-06 15:47:20.215146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.821 qpair failed and we were unable to recover it. 00:42:52.821 [2024-11-06 15:47:20.225023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.821 [2024-11-06 15:47:20.225092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.821 [2024-11-06 15:47:20.225131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.821 [2024-11-06 15:47:20.225152] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.821 [2024-11-06 15:47:20.225169] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.821 [2024-11-06 15:47:20.235166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.821 qpair failed and we were unable to recover it. 00:42:52.821 [2024-11-06 15:47:20.245116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.821 [2024-11-06 15:47:20.245187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.821 [2024-11-06 15:47:20.245219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.821 [2024-11-06 15:47:20.245245] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.821 [2024-11-06 15:47:20.245264] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.821 [2024-11-06 15:47:20.255280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.821 qpair failed and we were unable to recover it. 
00:42:52.821 [2024-11-06 15:47:20.265183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.821 [2024-11-06 15:47:20.265256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.821 [2024-11-06 15:47:20.265289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.821 [2024-11-06 15:47:20.265311] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.821 [2024-11-06 15:47:20.265327] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.821 [2024-11-06 15:47:20.275233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.821 qpair failed and we were unable to recover it. 00:42:52.821 [2024-11-06 15:47:20.285209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.821 [2024-11-06 15:47:20.285281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.821 [2024-11-06 15:47:20.285313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.821 [2024-11-06 15:47:20.285335] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.821 [2024-11-06 15:47:20.285351] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.821 [2024-11-06 15:47:20.295257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.821 qpair failed and we were unable to recover it. 00:42:52.821 [2024-11-06 15:47:20.305388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.821 [2024-11-06 15:47:20.305456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.821 [2024-11-06 15:47:20.305489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.821 [2024-11-06 15:47:20.305511] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.821 [2024-11-06 15:47:20.305528] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.821 [2024-11-06 15:47:20.315431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.821 qpair failed and we were unable to recover it. 
00:42:52.821 [2024-11-06 15:47:20.325356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.821 [2024-11-06 15:47:20.325427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.821 [2024-11-06 15:47:20.325459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.821 [2024-11-06 15:47:20.325481] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.821 [2024-11-06 15:47:20.325498] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.821 [2024-11-06 15:47:20.335609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.821 qpair failed and we were unable to recover it. 00:42:52.821 [2024-11-06 15:47:20.345505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.821 [2024-11-06 15:47:20.345575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.821 [2024-11-06 15:47:20.345608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.821 [2024-11-06 15:47:20.345630] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.821 [2024-11-06 15:47:20.345647] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.821 [2024-11-06 15:47:20.355656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.821 qpair failed and we were unable to recover it. 00:42:52.821 [2024-11-06 15:47:20.365462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.821 [2024-11-06 15:47:20.365529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.821 [2024-11-06 15:47:20.365561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.821 [2024-11-06 15:47:20.365584] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.821 [2024-11-06 15:47:20.365604] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.821 [2024-11-06 15:47:20.375703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.821 qpair failed and we were unable to recover it. 
00:42:52.821 [2024-11-06 15:47:20.385556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.821 [2024-11-06 15:47:20.385625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.821 [2024-11-06 15:47:20.385658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.821 [2024-11-06 15:47:20.385679] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.821 [2024-11-06 15:47:20.385697] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.821 [2024-11-06 15:47:20.395713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.821 qpair failed and we were unable to recover it. 00:42:52.821 [2024-11-06 15:47:20.405535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.821 [2024-11-06 15:47:20.405600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.821 [2024-11-06 15:47:20.405632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.821 [2024-11-06 15:47:20.405653] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.821 [2024-11-06 15:47:20.405670] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.821 [2024-11-06 15:47:20.415674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.821 qpair failed and we were unable to recover it. 00:42:52.821 [2024-11-06 15:47:20.425674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.821 [2024-11-06 15:47:20.425745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.821 [2024-11-06 15:47:20.425777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.821 [2024-11-06 15:47:20.425798] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.821 [2024-11-06 15:47:20.425814] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.822 [2024-11-06 15:47:20.437655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.822 qpair failed and we were unable to recover it. 
00:42:52.822 [2024-11-06 15:47:20.445693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:52.822 [2024-11-06 15:47:20.445793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:52.822 [2024-11-06 15:47:20.445826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:52.822 [2024-11-06 15:47:20.445848] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:52.822 [2024-11-06 15:47:20.445867] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:52.822 [2024-11-06 15:47:20.456004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:52.822 qpair failed and we were unable to recover it. 00:42:53.081 [2024-11-06 15:47:20.465752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.081 [2024-11-06 15:47:20.465829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.081 [2024-11-06 15:47:20.465860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.081 [2024-11-06 15:47:20.465881] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.081 [2024-11-06 15:47:20.465898] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.082 [2024-11-06 15:47:20.475945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.082 qpair failed and we were unable to recover it. 00:42:53.082 [2024-11-06 15:47:20.485711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.082 [2024-11-06 15:47:20.485785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.082 [2024-11-06 15:47:20.485818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.082 [2024-11-06 15:47:20.485840] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.082 [2024-11-06 15:47:20.485858] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.082 [2024-11-06 15:47:20.495862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.082 qpair failed and we were unable to recover it. 
00:42:53.082 [2024-11-06 15:47:20.505889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.082 [2024-11-06 15:47:20.505957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.082 [2024-11-06 15:47:20.505993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.082 [2024-11-06 15:47:20.506014] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.082 [2024-11-06 15:47:20.506033] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.082 [2024-11-06 15:47:20.515999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.082 qpair failed and we were unable to recover it. 00:42:53.082 [2024-11-06 15:47:20.525935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.082 [2024-11-06 15:47:20.526005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.082 [2024-11-06 15:47:20.526039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.082 [2024-11-06 15:47:20.526060] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.082 [2024-11-06 15:47:20.526077] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.082 [2024-11-06 15:47:20.536145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.082 qpair failed and we were unable to recover it. 00:42:53.082 [2024-11-06 15:47:20.546029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.082 [2024-11-06 15:47:20.546106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.082 [2024-11-06 15:47:20.546154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.082 [2024-11-06 15:47:20.546175] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.082 [2024-11-06 15:47:20.546192] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.082 [2024-11-06 15:47:20.556008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.082 qpair failed and we were unable to recover it. 
00:42:53.082 [2024-11-06 15:47:20.565921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.082 [2024-11-06 15:47:20.565988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.082 [2024-11-06 15:47:20.566019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.082 [2024-11-06 15:47:20.566041] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.082 [2024-11-06 15:47:20.566058] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.082 [2024-11-06 15:47:20.576136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.082 qpair failed and we were unable to recover it. 00:42:53.082 [2024-11-06 15:47:20.586210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.082 [2024-11-06 15:47:20.586281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.082 [2024-11-06 15:47:20.586312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.082 [2024-11-06 15:47:20.586334] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.082 [2024-11-06 15:47:20.586358] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.082 [2024-11-06 15:47:20.596195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.082 qpair failed and we were unable to recover it. 00:42:53.082 [2024-11-06 15:47:20.606182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.082 [2024-11-06 15:47:20.606252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.082 [2024-11-06 15:47:20.606284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.082 [2024-11-06 15:47:20.606306] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.082 [2024-11-06 15:47:20.606326] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.082 [2024-11-06 15:47:20.616255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.082 qpair failed and we were unable to recover it. 
00:42:53.082 [2024-11-06 15:47:20.626210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.082 [2024-11-06 15:47:20.626279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.082 [2024-11-06 15:47:20.626312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.082 [2024-11-06 15:47:20.626332] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.082 [2024-11-06 15:47:20.626349] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.082 [2024-11-06 15:47:20.636525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.082 qpair failed and we were unable to recover it. 00:42:53.082 [2024-11-06 15:47:20.646404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.082 [2024-11-06 15:47:20.646472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.082 [2024-11-06 15:47:20.646504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.082 [2024-11-06 15:47:20.646526] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.082 [2024-11-06 15:47:20.646543] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.082 [2024-11-06 15:47:20.656427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.082 qpair failed and we were unable to recover it. 00:42:53.082 [2024-11-06 15:47:20.666367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.082 [2024-11-06 15:47:20.666428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.082 [2024-11-06 15:47:20.666462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.082 [2024-11-06 15:47:20.666484] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.082 [2024-11-06 15:47:20.666501] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.082 [2024-11-06 15:47:20.676526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.082 qpair failed and we were unable to recover it. 
00:42:53.082 [2024-11-06 15:47:20.686385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.082 [2024-11-06 15:47:20.686450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.082 [2024-11-06 15:47:20.686481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.082 [2024-11-06 15:47:20.686502] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.082 [2024-11-06 15:47:20.686519] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.082 [2024-11-06 15:47:20.696466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.082 qpair failed and we were unable to recover it. 00:42:53.082 [2024-11-06 15:47:20.706512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.082 [2024-11-06 15:47:20.706583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.082 [2024-11-06 15:47:20.706616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.082 [2024-11-06 15:47:20.706637] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.082 [2024-11-06 15:47:20.706654] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.082 [2024-11-06 15:47:20.716592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.082 qpair failed and we were unable to recover it. 00:42:53.342 [2024-11-06 15:47:20.726525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.343 [2024-11-06 15:47:20.726594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.343 [2024-11-06 15:47:20.726628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.343 [2024-11-06 15:47:20.726650] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.343 [2024-11-06 15:47:20.726667] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.343 [2024-11-06 15:47:20.736760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.343 qpair failed and we were unable to recover it. 
00:42:53.343 [2024-11-06 15:47:20.746540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.343 [2024-11-06 15:47:20.746612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.343 [2024-11-06 15:47:20.746646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.343 [2024-11-06 15:47:20.746668] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.343 [2024-11-06 15:47:20.746685] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.343 [2024-11-06 15:47:20.756689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.343 qpair failed and we were unable to recover it. 00:42:53.343 [2024-11-06 15:47:20.766669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.343 [2024-11-06 15:47:20.766744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.343 [2024-11-06 15:47:20.766776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.343 [2024-11-06 15:47:20.766797] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.343 [2024-11-06 15:47:20.766814] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.343 [2024-11-06 15:47:20.776560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.343 qpair failed and we were unable to recover it. 00:42:53.343 [2024-11-06 15:47:20.786731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.343 [2024-11-06 15:47:20.786798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.343 [2024-11-06 15:47:20.786830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.343 [2024-11-06 15:47:20.786852] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.343 [2024-11-06 15:47:20.786869] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.343 [2024-11-06 15:47:20.796936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.343 qpair failed and we were unable to recover it. 
00:42:53.343 [2024-11-06 15:47:20.806821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.343 [2024-11-06 15:47:20.806891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.343 [2024-11-06 15:47:20.806923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.343 [2024-11-06 15:47:20.806944] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.343 [2024-11-06 15:47:20.806963] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.343 [2024-11-06 15:47:20.816776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.343 qpair failed and we were unable to recover it. 00:42:53.343 [2024-11-06 15:47:20.826666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.343 [2024-11-06 15:47:20.826732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.343 [2024-11-06 15:47:20.826764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.343 [2024-11-06 15:47:20.826785] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.343 [2024-11-06 15:47:20.826802] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.343 [2024-11-06 15:47:20.837032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.343 qpair failed and we were unable to recover it. 00:42:53.343 [2024-11-06 15:47:20.846921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.343 [2024-11-06 15:47:20.846987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.343 [2024-11-06 15:47:20.847024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.343 [2024-11-06 15:47:20.847047] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.343 [2024-11-06 15:47:20.847064] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.343 [2024-11-06 15:47:20.857026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.343 qpair failed and we were unable to recover it. 
00:42:53.343 [2024-11-06 15:47:20.866972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.343 [2024-11-06 15:47:20.867037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.343 [2024-11-06 15:47:20.867068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.343 [2024-11-06 15:47:20.867089] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.343 [2024-11-06 15:47:20.867107] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.343 [2024-11-06 15:47:20.877192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.343 qpair failed and we were unable to recover it. 00:42:53.343 [2024-11-06 15:47:20.887035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.343 [2024-11-06 15:47:20.887107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.343 [2024-11-06 15:47:20.887157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.343 [2024-11-06 15:47:20.887180] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.343 [2024-11-06 15:47:20.887198] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.343 [2024-11-06 15:47:20.897236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.343 qpair failed and we were unable to recover it. 00:42:53.343 [2024-11-06 15:47:20.907078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.343 [2024-11-06 15:47:20.907148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.343 [2024-11-06 15:47:20.907181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.343 [2024-11-06 15:47:20.907203] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.343 [2024-11-06 15:47:20.907220] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.343 [2024-11-06 15:47:20.917194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.343 qpair failed and we were unable to recover it. 
00:42:53.343 [2024-11-06 15:47:20.927233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.343 [2024-11-06 15:47:20.927305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.343 [2024-11-06 15:47:20.927336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.343 [2024-11-06 15:47:20.927358] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.343 [2024-11-06 15:47:20.927381] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.343 [2024-11-06 15:47:20.937361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.343 qpair failed and we were unable to recover it. 00:42:53.343 [2024-11-06 15:47:20.947343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.343 [2024-11-06 15:47:20.947412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.343 [2024-11-06 15:47:20.947444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.343 [2024-11-06 15:47:20.947464] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.343 [2024-11-06 15:47:20.947482] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.343 [2024-11-06 15:47:20.957435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.343 qpair failed and we were unable to recover it. 00:42:53.343 [2024-11-06 15:47:20.967214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.343 [2024-11-06 15:47:20.967284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.343 [2024-11-06 15:47:20.967317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.344 [2024-11-06 15:47:20.967338] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.344 [2024-11-06 15:47:20.967354] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.344 [2024-11-06 15:47:20.977488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.344 qpair failed and we were unable to recover it. 
00:42:53.604 [2024-11-06 15:47:20.987361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.604 [2024-11-06 15:47:20.987430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.604 [2024-11-06 15:47:20.987463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.604 [2024-11-06 15:47:20.987484] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.604 [2024-11-06 15:47:20.987500] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.604 [2024-11-06 15:47:20.997518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.604 qpair failed and we were unable to recover it. 00:42:53.604 [2024-11-06 15:47:21.007450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.604 [2024-11-06 15:47:21.007524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.604 [2024-11-06 15:47:21.007556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.604 [2024-11-06 15:47:21.007576] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.604 [2024-11-06 15:47:21.007594] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.604 [2024-11-06 15:47:21.017504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.604 qpair failed and we were unable to recover it. 00:42:53.604 [2024-11-06 15:47:21.027423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.604 [2024-11-06 15:47:21.027486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.604 [2024-11-06 15:47:21.027517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.604 [2024-11-06 15:47:21.027537] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.604 [2024-11-06 15:47:21.027554] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.604 [2024-11-06 15:47:21.037545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.604 qpair failed and we were unable to recover it. 
00:42:53.604 [2024-11-06 15:47:21.047526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.604 [2024-11-06 15:47:21.047593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.604 [2024-11-06 15:47:21.047625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.604 [2024-11-06 15:47:21.047648] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.604 [2024-11-06 15:47:21.047665] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.604 [2024-11-06 15:47:21.057913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.604 qpair failed and we were unable to recover it. 00:42:53.604 [2024-11-06 15:47:21.067582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.604 [2024-11-06 15:47:21.067656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.604 [2024-11-06 15:47:21.067687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.604 [2024-11-06 15:47:21.067708] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.604 [2024-11-06 15:47:21.067725] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.604 [2024-11-06 15:47:21.077699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.604 qpair failed and we were unable to recover it. 00:42:53.604 [2024-11-06 15:47:21.087660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.604 [2024-11-06 15:47:21.087739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.604 [2024-11-06 15:47:21.087769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.604 [2024-11-06 15:47:21.087790] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.604 [2024-11-06 15:47:21.087807] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.604 [2024-11-06 15:47:21.097744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.604 qpair failed and we were unable to recover it. 
00:42:53.604 [2024-11-06 15:47:21.107714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.604 [2024-11-06 15:47:21.107795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.604 [2024-11-06 15:47:21.107826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.604 [2024-11-06 15:47:21.107847] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.604 [2024-11-06 15:47:21.107864] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.604 [2024-11-06 15:47:21.117803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.604 qpair failed and we were unable to recover it. 00:42:53.604 [2024-11-06 15:47:21.127838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.604 [2024-11-06 15:47:21.127910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.604 [2024-11-06 15:47:21.127941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.604 [2024-11-06 15:47:21.127962] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.604 [2024-11-06 15:47:21.127979] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.604 [2024-11-06 15:47:21.138087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.604 qpair failed and we were unable to recover it. 00:42:53.604 [2024-11-06 15:47:21.147924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.604 [2024-11-06 15:47:21.147997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.604 [2024-11-06 15:47:21.148029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.604 [2024-11-06 15:47:21.148051] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.604 [2024-11-06 15:47:21.148067] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.604 [2024-11-06 15:47:21.158020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.604 qpair failed and we were unable to recover it. 
00:42:53.604 [2024-11-06 15:47:21.167867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.604 [2024-11-06 15:47:21.167941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.604 [2024-11-06 15:47:21.167973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.604 [2024-11-06 15:47:21.167994] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.604 [2024-11-06 15:47:21.168011] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.604 [2024-11-06 15:47:21.178222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.604 qpair failed and we were unable to recover it. 00:42:53.604 [2024-11-06 15:47:21.188044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.604 [2024-11-06 15:47:21.188117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.605 [2024-11-06 15:47:21.188155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.605 [2024-11-06 15:47:21.188182] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.605 [2024-11-06 15:47:21.188200] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.605 [2024-11-06 15:47:21.198381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.605 qpair failed and we were unable to recover it. 00:42:53.605 [2024-11-06 15:47:21.208807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.605 [2024-11-06 15:47:21.208880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.605 [2024-11-06 15:47:21.208912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.605 [2024-11-06 15:47:21.208933] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.605 [2024-11-06 15:47:21.208950] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.605 [2024-11-06 15:47:21.218478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.605 qpair failed and we were unable to recover it. 
00:42:53.605 [2024-11-06 15:47:21.228188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.605 [2024-11-06 15:47:21.228257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.605 [2024-11-06 15:47:21.228287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.605 [2024-11-06 15:47:21.228309] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.605 [2024-11-06 15:47:21.228327] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.605 [2024-11-06 15:47:21.238345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.605 qpair failed and we were unable to recover it. 00:42:53.865 [2024-11-06 15:47:21.248220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.865 [2024-11-06 15:47:21.248295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.865 [2024-11-06 15:47:21.248329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.865 [2024-11-06 15:47:21.248350] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.866 [2024-11-06 15:47:21.248368] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.866 [2024-11-06 15:47:21.258359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.866 qpair failed and we were unable to recover it. 00:42:53.866 [2024-11-06 15:47:21.268222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.866 [2024-11-06 15:47:21.268296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.866 [2024-11-06 15:47:21.268330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.866 [2024-11-06 15:47:21.268351] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.866 [2024-11-06 15:47:21.268374] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.866 [2024-11-06 15:47:21.278316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.866 qpair failed and we were unable to recover it. 
00:42:53.866 [2024-11-06 15:47:21.288344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.866 [2024-11-06 15:47:21.288412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.866 [2024-11-06 15:47:21.288444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.866 [2024-11-06 15:47:21.288465] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.866 [2024-11-06 15:47:21.288482] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.866 [2024-11-06 15:47:21.298453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.866 qpair failed and we were unable to recover it. 00:42:53.866 [2024-11-06 15:47:21.308470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.866 [2024-11-06 15:47:21.308541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.866 [2024-11-06 15:47:21.308573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.866 [2024-11-06 15:47:21.308594] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.866 [2024-11-06 15:47:21.308612] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.866 [2024-11-06 15:47:21.318463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.866 qpair failed and we were unable to recover it. 00:42:53.866 [2024-11-06 15:47:21.328609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.866 [2024-11-06 15:47:21.328672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.866 [2024-11-06 15:47:21.328704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.866 [2024-11-06 15:47:21.328726] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.866 [2024-11-06 15:47:21.328743] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.866 [2024-11-06 15:47:21.338593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.866 qpair failed and we were unable to recover it. 
00:42:53.866 [2024-11-06 15:47:21.348467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.866 [2024-11-06 15:47:21.348533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.866 [2024-11-06 15:47:21.348562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.866 [2024-11-06 15:47:21.348583] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.866 [2024-11-06 15:47:21.348600] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.866 [2024-11-06 15:47:21.358785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.866 qpair failed and we were unable to recover it. 00:42:53.866 [2024-11-06 15:47:21.368586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.866 [2024-11-06 15:47:21.368655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.866 [2024-11-06 15:47:21.368688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.866 [2024-11-06 15:47:21.368710] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.866 [2024-11-06 15:47:21.368727] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.866 [2024-11-06 15:47:21.378923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.866 qpair failed and we were unable to recover it. 00:42:53.866 [2024-11-06 15:47:21.388669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.866 [2024-11-06 15:47:21.388744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.866 [2024-11-06 15:47:21.388776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.866 [2024-11-06 15:47:21.388797] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.866 [2024-11-06 15:47:21.388815] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.866 [2024-11-06 15:47:21.398904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.866 qpair failed and we were unable to recover it. 
00:42:53.866 [2024-11-06 15:47:21.408762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.866 [2024-11-06 15:47:21.408836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.866 [2024-11-06 15:47:21.408869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.866 [2024-11-06 15:47:21.408891] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.866 [2024-11-06 15:47:21.408909] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.866 [2024-11-06 15:47:21.418930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.866 qpair failed and we were unable to recover it. 00:42:53.866 [2024-11-06 15:47:21.428817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.866 [2024-11-06 15:47:21.428890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.866 [2024-11-06 15:47:21.428922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.866 [2024-11-06 15:47:21.428943] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.866 [2024-11-06 15:47:21.428960] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.866 [2024-11-06 15:47:21.439093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.866 qpair failed and we were unable to recover it. 00:42:53.866 [2024-11-06 15:47:21.448812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.866 [2024-11-06 15:47:21.448891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.866 [2024-11-06 15:47:21.448926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.866 [2024-11-06 15:47:21.448947] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.866 [2024-11-06 15:47:21.448964] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.866 [2024-11-06 15:47:21.459099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.866 qpair failed and we were unable to recover it. 
00:42:53.866 [2024-11-06 15:47:21.468929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.866 [2024-11-06 15:47:21.468997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.866 [2024-11-06 15:47:21.469030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.866 [2024-11-06 15:47:21.469052] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.866 [2024-11-06 15:47:21.469069] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.866 [2024-11-06 15:47:21.479073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.866 qpair failed and we were unable to recover it. 00:42:53.866 [2024-11-06 15:47:21.488924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:53.866 [2024-11-06 15:47:21.488994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:53.866 [2024-11-06 15:47:21.489026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:53.866 [2024-11-06 15:47:21.489048] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:53.866 [2024-11-06 15:47:21.489064] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:53.866 [2024-11-06 15:47:21.498993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:53.866 qpair failed and we were unable to recover it. 00:42:54.127 [2024-11-06 15:47:21.509100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.127 [2024-11-06 15:47:21.509180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.127 [2024-11-06 15:47:21.509212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.127 [2024-11-06 15:47:21.509233] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.127 [2024-11-06 15:47:21.509252] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.127 [2024-11-06 15:47:21.520683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.127 qpair failed and we were unable to recover it. 
00:42:54.127 [2024-11-06 15:47:21.529197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.127 [2024-11-06 15:47:21.529269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.127 [2024-11-06 15:47:21.529301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.127 [2024-11-06 15:47:21.529334] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.127 [2024-11-06 15:47:21.529352] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.127 [2024-11-06 15:47:21.539321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.127 qpair failed and we were unable to recover it. 00:42:54.127 [2024-11-06 15:47:21.549211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.127 [2024-11-06 15:47:21.549282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.127 [2024-11-06 15:47:21.549315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.127 [2024-11-06 15:47:21.549337] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.127 [2024-11-06 15:47:21.549354] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.127 [2024-11-06 15:47:21.559341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.127 qpair failed and we were unable to recover it. 00:42:54.127 [2024-11-06 15:47:21.569319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.127 [2024-11-06 15:47:21.569391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.127 [2024-11-06 15:47:21.569423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.127 [2024-11-06 15:47:21.569444] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.127 [2024-11-06 15:47:21.569462] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.127 [2024-11-06 15:47:21.579395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.127 qpair failed and we were unable to recover it. 
00:42:54.127 [2024-11-06 15:47:21.589399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.127 [2024-11-06 15:47:21.589471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.127 [2024-11-06 15:47:21.589502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.127 [2024-11-06 15:47:21.589524] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.127 [2024-11-06 15:47:21.589541] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.127 [2024-11-06 15:47:21.599454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.127 qpair failed and we were unable to recover it. 00:42:54.127 [2024-11-06 15:47:21.609426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.127 [2024-11-06 15:47:21.609492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.127 [2024-11-06 15:47:21.609524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.127 [2024-11-06 15:47:21.609545] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.127 [2024-11-06 15:47:21.609562] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.127 [2024-11-06 15:47:21.619418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.127 qpair failed and we were unable to recover it. 00:42:54.127 [2024-11-06 15:47:21.629473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.127 [2024-11-06 15:47:21.629536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.127 [2024-11-06 15:47:21.629567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.127 [2024-11-06 15:47:21.629588] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.127 [2024-11-06 15:47:21.629608] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.127 [2024-11-06 15:47:21.639422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.127 qpair failed and we were unable to recover it. 
00:42:54.127 [2024-11-06 15:47:21.649468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.127 [2024-11-06 15:47:21.649544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.127 [2024-11-06 15:47:21.649576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.127 [2024-11-06 15:47:21.649597] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.127 [2024-11-06 15:47:21.649614] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.127 [2024-11-06 15:47:21.659518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.127 qpair failed and we were unable to recover it. 00:42:54.127 [2024-11-06 15:47:21.669506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.127 [2024-11-06 15:47:21.669571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.127 [2024-11-06 15:47:21.669604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.127 [2024-11-06 15:47:21.669625] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.127 [2024-11-06 15:47:21.669642] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.127 [2024-11-06 15:47:21.679739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.127 qpair failed and we were unable to recover it. 00:42:54.127 [2024-11-06 15:47:21.689641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.127 [2024-11-06 15:47:21.689710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.127 [2024-11-06 15:47:21.689743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.127 [2024-11-06 15:47:21.689764] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.127 [2024-11-06 15:47:21.689781] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.127 [2024-11-06 15:47:21.699849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.127 qpair failed and we were unable to recover it. 
00:42:54.127 [2024-11-06 15:47:21.709731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.127 [2024-11-06 15:47:21.709800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.127 [2024-11-06 15:47:21.709833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.127 [2024-11-06 15:47:21.709855] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.127 [2024-11-06 15:47:21.709872] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.127 [2024-11-06 15:47:21.719676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.127 qpair failed and we were unable to recover it. 00:42:54.127 [2024-11-06 15:47:21.729769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.127 [2024-11-06 15:47:21.729845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.127 [2024-11-06 15:47:21.729877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.127 [2024-11-06 15:47:21.729897] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.127 [2024-11-06 15:47:21.729914] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.127 [2024-11-06 15:47:21.739819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.127 qpair failed and we were unable to recover it. 00:42:54.127 [2024-11-06 15:47:21.749732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.128 [2024-11-06 15:47:21.749801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.128 [2024-11-06 15:47:21.749833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.128 [2024-11-06 15:47:21.749854] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.128 [2024-11-06 15:47:21.749871] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.128 [2024-11-06 15:47:21.759884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.128 qpair failed and we were unable to recover it. 
00:42:54.388 [2024-11-06 15:47:21.769817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.388 [2024-11-06 15:47:21.769890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.388 [2024-11-06 15:47:21.769921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.388 [2024-11-06 15:47:21.769942] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.388 [2024-11-06 15:47:21.769960] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.388 [2024-11-06 15:47:21.779943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.388 qpair failed and we were unable to recover it. 00:42:54.388 [2024-11-06 15:47:21.789853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.388 [2024-11-06 15:47:21.789932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.388 [2024-11-06 15:47:21.789968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.388 [2024-11-06 15:47:21.789989] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.388 [2024-11-06 15:47:21.790006] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.388 [2024-11-06 15:47:21.799979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.388 qpair failed and we were unable to recover it. 00:42:54.388 [2024-11-06 15:47:21.809934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.388 [2024-11-06 15:47:21.810011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.388 [2024-11-06 15:47:21.810042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.388 [2024-11-06 15:47:21.810063] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.388 [2024-11-06 15:47:21.810080] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.388 [2024-11-06 15:47:21.820011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.388 qpair failed and we were unable to recover it. 
00:42:54.388 [2024-11-06 15:47:21.832553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.388 [2024-11-06 15:47:21.832628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.388 [2024-11-06 15:47:21.832661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.388 [2024-11-06 15:47:21.832683] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.388 [2024-11-06 15:47:21.832700] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.388 [2024-11-06 15:47:21.839986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.388 qpair failed and we were unable to recover it. 00:42:54.388 [2024-11-06 15:47:21.849966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.388 [2024-11-06 15:47:21.850036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.388 [2024-11-06 15:47:21.850069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.388 [2024-11-06 15:47:21.850091] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.388 [2024-11-06 15:47:21.850108] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.388 [2024-11-06 15:47:21.860213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.388 qpair failed and we were unable to recover it. 00:42:54.388 [2024-11-06 15:47:21.870074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.388 [2024-11-06 15:47:21.870162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.388 [2024-11-06 15:47:21.870191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.388 [2024-11-06 15:47:21.870219] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.388 [2024-11-06 15:47:21.870236] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.388 [2024-11-06 15:47:21.880276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.388 qpair failed and we were unable to recover it. 
00:42:54.388 [2024-11-06 15:47:21.890192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.388 [2024-11-06 15:47:21.890264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.388 [2024-11-06 15:47:21.890295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.388 [2024-11-06 15:47:21.890316] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.388 [2024-11-06 15:47:21.890332] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.388 [2024-11-06 15:47:21.900428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.388 qpair failed and we were unable to recover it. 00:42:54.388 [2024-11-06 15:47:21.910167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.388 [2024-11-06 15:47:21.910235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.388 [2024-11-06 15:47:21.910267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.388 [2024-11-06 15:47:21.910289] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.388 [2024-11-06 15:47:21.910306] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.388 [2024-11-06 15:47:21.920411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.388 qpair failed and we were unable to recover it. 00:42:54.388 [2024-11-06 15:47:21.930315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.388 [2024-11-06 15:47:21.930385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.388 [2024-11-06 15:47:21.930417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.388 [2024-11-06 15:47:21.930439] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.388 [2024-11-06 15:47:21.930456] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.388 [2024-11-06 15:47:21.940505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.388 qpair failed and we were unable to recover it. 
00:42:54.388 [2024-11-06 15:47:21.950325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.388 [2024-11-06 15:47:21.950391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.388 [2024-11-06 15:47:21.950424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.388 [2024-11-06 15:47:21.950445] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.388 [2024-11-06 15:47:21.950463] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.389 [2024-11-06 15:47:21.960349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.389 qpair failed and we were unable to recover it. 00:42:54.389 [2024-11-06 15:47:21.970368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.389 [2024-11-06 15:47:21.970432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.389 [2024-11-06 15:47:21.970465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.389 [2024-11-06 15:47:21.970486] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.389 [2024-11-06 15:47:21.970505] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.389 [2024-11-06 15:47:21.980711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.389 qpair failed and we were unable to recover it. 00:42:54.389 [2024-11-06 15:47:21.990576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.389 [2024-11-06 15:47:21.990648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.389 [2024-11-06 15:47:21.990682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.389 [2024-11-06 15:47:21.990704] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.389 [2024-11-06 15:47:21.990722] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.389 [2024-11-06 15:47:22.000568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.389 qpair failed and we were unable to recover it. 
00:42:54.389 [2024-11-06 15:47:22.010471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.389 [2024-11-06 15:47:22.010541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.389 [2024-11-06 15:47:22.010573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.389 [2024-11-06 15:47:22.010593] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.389 [2024-11-06 15:47:22.010608] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.389 [2024-11-06 15:47:22.020599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.389 qpair failed and we were unable to recover it. 00:42:54.648 [2024-11-06 15:47:22.030562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.648 [2024-11-06 15:47:22.030640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.648 [2024-11-06 15:47:22.030673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.648 [2024-11-06 15:47:22.030694] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.648 [2024-11-06 15:47:22.030711] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.648 [2024-11-06 15:47:22.040779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.648 qpair failed and we were unable to recover it. 00:42:54.648 [2024-11-06 15:47:22.050604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.648 [2024-11-06 15:47:22.050670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.648 [2024-11-06 15:47:22.050701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.648 [2024-11-06 15:47:22.050723] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.648 [2024-11-06 15:47:22.050740] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.648 [2024-11-06 15:47:22.060743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.648 qpair failed and we were unable to recover it. 
00:42:54.648 [2024-11-06 15:47:22.070617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.649 [2024-11-06 15:47:22.070683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.649 [2024-11-06 15:47:22.070714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.649 [2024-11-06 15:47:22.070736] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.649 [2024-11-06 15:47:22.070753] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.649 [2024-11-06 15:47:22.080791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.649 qpair failed and we were unable to recover it. 00:42:54.649 [2024-11-06 15:47:22.090733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.649 [2024-11-06 15:47:22.090802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.649 [2024-11-06 15:47:22.090834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.649 [2024-11-06 15:47:22.090856] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.649 [2024-11-06 15:47:22.090873] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.649 [2024-11-06 15:47:22.100772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.649 qpair failed and we were unable to recover it. 00:42:54.649 [2024-11-06 15:47:22.110949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.649 [2024-11-06 15:47:22.111013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.649 [2024-11-06 15:47:22.111043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.649 [2024-11-06 15:47:22.111064] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.649 [2024-11-06 15:47:22.111080] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.649 [2024-11-06 15:47:22.120842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.649 qpair failed and we were unable to recover it. 
00:42:54.649 [2024-11-06 15:47:22.130902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.649 [2024-11-06 15:47:22.130973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.649 [2024-11-06 15:47:22.131010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.649 [2024-11-06 15:47:22.131031] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.649 [2024-11-06 15:47:22.131048] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.649 [2024-11-06 15:47:22.142504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.649 qpair failed and we were unable to recover it. 00:42:54.649 [2024-11-06 15:47:22.150925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.649 [2024-11-06 15:47:22.150989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.649 [2024-11-06 15:47:22.151019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.649 [2024-11-06 15:47:22.151040] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.649 [2024-11-06 15:47:22.151057] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.649 [2024-11-06 15:47:22.161240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.649 qpair failed and we were unable to recover it. 00:42:54.649 [2024-11-06 15:47:22.170995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.649 [2024-11-06 15:47:22.171066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.649 [2024-11-06 15:47:22.171098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.649 [2024-11-06 15:47:22.171120] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.649 [2024-11-06 15:47:22.171144] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.649 [2024-11-06 15:47:22.181138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.649 qpair failed and we were unable to recover it. 
00:42:54.649 [2024-11-06 15:47:22.191058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.649 [2024-11-06 15:47:22.191134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.649 [2024-11-06 15:47:22.191166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.649 [2024-11-06 15:47:22.191186] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.649 [2024-11-06 15:47:22.191203] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.649 [2024-11-06 15:47:22.201232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.649 qpair failed and we were unable to recover it. 00:42:54.649 [2024-11-06 15:47:22.211113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.649 [2024-11-06 15:47:22.211186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.649 [2024-11-06 15:47:22.211217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.649 [2024-11-06 15:47:22.211238] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.649 [2024-11-06 15:47:22.211261] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.649 [2024-11-06 15:47:22.221177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.649 qpair failed and we were unable to recover it. 00:42:54.649 [2024-11-06 15:47:22.231256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.649 [2024-11-06 15:47:22.231322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.649 [2024-11-06 15:47:22.231356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.649 [2024-11-06 15:47:22.231377] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.649 [2024-11-06 15:47:22.231393] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.649 [2024-11-06 15:47:22.241348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.649 qpair failed and we were unable to recover it. 
00:42:54.649 [2024-11-06 15:47:22.251201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.649 [2024-11-06 15:47:22.251268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.649 [2024-11-06 15:47:22.251301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.649 [2024-11-06 15:47:22.251322] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.649 [2024-11-06 15:47:22.251339] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.649 [2024-11-06 15:47:22.261516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.649 qpair failed and we were unable to recover it. 00:42:54.649 [2024-11-06 15:47:22.271255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.649 [2024-11-06 15:47:22.271323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.649 [2024-11-06 15:47:22.271354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.649 [2024-11-06 15:47:22.271376] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.649 [2024-11-06 15:47:22.271393] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.649 [2024-11-06 15:47:22.281554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.649 qpair failed and we were unable to recover it. 00:42:54.909 [2024-11-06 15:47:22.291323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.909 [2024-11-06 15:47:22.291387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.909 [2024-11-06 15:47:22.291419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.909 [2024-11-06 15:47:22.291440] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.909 [2024-11-06 15:47:22.291459] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.909 [2024-11-06 15:47:22.301487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.909 qpair failed and we were unable to recover it. 
00:42:54.909 [2024-11-06 15:47:22.311393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.909 [2024-11-06 15:47:22.311459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.909 [2024-11-06 15:47:22.311492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.909 [2024-11-06 15:47:22.311513] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.909 [2024-11-06 15:47:22.311529] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.909 [2024-11-06 15:47:22.321611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.909 qpair failed and we were unable to recover it. 00:42:54.909 [2024-11-06 15:47:22.331492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.910 [2024-11-06 15:47:22.331560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.910 [2024-11-06 15:47:22.331593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.910 [2024-11-06 15:47:22.331614] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.910 [2024-11-06 15:47:22.331631] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.910 [2024-11-06 15:47:22.341596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.910 qpair failed and we were unable to recover it. 00:42:54.910 [2024-11-06 15:47:22.351625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.910 [2024-11-06 15:47:22.351694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.910 [2024-11-06 15:47:22.351726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.910 [2024-11-06 15:47:22.351747] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.910 [2024-11-06 15:47:22.351765] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.910 [2024-11-06 15:47:22.361734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.910 qpair failed and we were unable to recover it. 
00:42:54.910 [2024-11-06 15:47:22.371499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.910 [2024-11-06 15:47:22.371569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.910 [2024-11-06 15:47:22.371601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.910 [2024-11-06 15:47:22.371622] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.910 [2024-11-06 15:47:22.371639] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.910 [2024-11-06 15:47:22.381879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.910 qpair failed and we were unable to recover it. 00:42:54.910 [2024-11-06 15:47:22.391661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.910 [2024-11-06 15:47:22.391737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.910 [2024-11-06 15:47:22.391768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.910 [2024-11-06 15:47:22.391789] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.910 [2024-11-06 15:47:22.391806] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.910 [2024-11-06 15:47:22.401838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.910 qpair failed and we were unable to recover it. 00:42:54.910 [2024-11-06 15:47:22.411615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.910 [2024-11-06 15:47:22.411687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.910 [2024-11-06 15:47:22.411719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.910 [2024-11-06 15:47:22.411741] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.910 [2024-11-06 15:47:22.411757] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.910 [2024-11-06 15:47:22.421797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.910 qpair failed and we were unable to recover it. 
00:42:54.910 [2024-11-06 15:47:22.431794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.910 [2024-11-06 15:47:22.431860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.910 [2024-11-06 15:47:22.431894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.910 [2024-11-06 15:47:22.431915] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.910 [2024-11-06 15:47:22.431932] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.910 [2024-11-06 15:47:22.441920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.910 qpair failed and we were unable to recover it. 00:42:54.910 [2024-11-06 15:47:22.455213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.910 [2024-11-06 15:47:22.455293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.910 [2024-11-06 15:47:22.455326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.910 [2024-11-06 15:47:22.455348] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.910 [2024-11-06 15:47:22.455365] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.910 [2024-11-06 15:47:22.461918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.910 qpair failed and we were unable to recover it. 00:42:54.910 [2024-11-06 15:47:22.471904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.910 [2024-11-06 15:47:22.471973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.910 [2024-11-06 15:47:22.472012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.910 [2024-11-06 15:47:22.472034] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.910 [2024-11-06 15:47:22.472053] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.910 [2024-11-06 15:47:22.482148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.910 qpair failed and we were unable to recover it. 
00:42:54.910 [2024-11-06 15:47:22.491960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.910 [2024-11-06 15:47:22.492027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.910 [2024-11-06 15:47:22.492059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.910 [2024-11-06 15:47:22.492082] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.910 [2024-11-06 15:47:22.492099] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.910 [2024-11-06 15:47:22.502043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.910 qpair failed and we were unable to recover it. 00:42:54.910 [2024-11-06 15:47:22.512028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.910 [2024-11-06 15:47:22.512094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.910 [2024-11-06 15:47:22.512133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.910 [2024-11-06 15:47:22.512154] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.910 [2024-11-06 15:47:22.512173] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.910 [2024-11-06 15:47:22.522290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.910 qpair failed and we were unable to recover it. 00:42:54.910 [2024-11-06 15:47:22.532074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:54.910 [2024-11-06 15:47:22.532162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:54.910 [2024-11-06 15:47:22.532194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:54.910 [2024-11-06 15:47:22.532215] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:54.910 [2024-11-06 15:47:22.532232] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:54.910 [2024-11-06 15:47:22.542264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:54.910 qpair failed and we were unable to recover it. 
00:42:55.170 [2024-11-06 15:47:22.552117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:55.170 [2024-11-06 15:47:22.552199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:55.170 [2024-11-06 15:47:22.552234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:55.170 [2024-11-06 15:47:22.552256] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:55.170 [2024-11-06 15:47:22.552280] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:42:55.170 [2024-11-06 15:47:22.562445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:42:55.170 qpair failed and we were unable to recover it. 00:42:55.170 [2024-11-06 15:47:22.562723] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:42:55.170 A controller has encountered a failure and is being reset. 00:42:55.170 [2024-11-06 15:47:22.572826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:55.170 [2024-11-06 15:47:22.572939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:55.170 [2024-11-06 15:47:22.573026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:55.170 [2024-11-06 15:47:22.573076] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:55.170 [2024-11-06 15:47:22.573123] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cfb40 00:42:55.170 [2024-11-06 15:47:22.582649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:42:55.170 qpair failed and we were unable to recover it. 00:42:55.170 [2024-11-06 15:47:22.592497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:42:55.170 [2024-11-06 15:47:22.592577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:42:55.170 [2024-11-06 15:47:22.592619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:42:55.170 [2024-11-06 15:47:22.592650] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:42:55.170 [2024-11-06 15:47:22.592673] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cfb40 00:42:55.170 [2024-11-06 15:47:22.602456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:42:55.170 qpair failed and we were unable to recover it. 
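The pattern above is the tc2 disconnect exercise: the host keeps retrying the fabrics CONNECT on I/O qpair 4, the target rejects each attempt with "Unknown controller ID", and the host eventually fails its Keep Alive and resets the controller. A minimal sketch for summarizing those retries from a saved copy of this output; the log path is an assumption, the test does not write such a file itself.

```bash
#!/usr/bin/env bash
# Illustrative only: tally fabrics CONNECT failures and unrecovered qpairs
# in a saved copy of this output. LOGFILE is an assumed path.
LOGFILE=${1:-target_disconnect.log}

echo "Failed CONNECT polls : $(grep -c 'Failed to poll NVMe-oF Fabric CONNECT' "$LOGFILE")"
echo "Unrecovered qpairs   : $(grep -c 'qpair failed and we were unable to recover it' "$LOGFILE")"

# CQ transport errors broken down by qpair id
grep -o 'CQ transport error -6 (No such device or address) on qpair id [0-9]*' "$LOGFILE" \
  | awk '{print $NF}' | sort | uniq -c | sort -rn
```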
00:42:55.170 [2024-11-06 15:47:22.602891] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:42:55.170 [2024-11-06 15:47:22.647180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:42:55.170 Controller properly reset. 00:42:55.429 Initializing NVMe Controllers 00:42:55.429 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:42:55.429 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:42:55.429 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:42:55.429 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:42:55.429 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:42:55.429 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:42:55.429 Initialization complete. Launching workers. 00:42:55.429 Starting thread on core 1 00:42:55.429 Starting thread on core 2 00:42:55.429 Starting thread on core 0 00:42:55.429 Starting thread on core 3 00:42:55.429 15:47:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:42:55.429 00:42:55.430 real 0m12.151s 00:42:55.430 user 0m26.505s 00:42:55.430 sys 0m2.811s 00:42:55.430 15:47:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:55.430 15:47:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:42:55.430 ************************************ 00:42:55.430 END TEST nvmf_target_disconnect_tc2 00:42:55.430 ************************************ 00:42:55.430 15:47:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:42:55.430 15:47:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:42:55.430 15:47:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:42:55.430 15:47:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:55.430 15:47:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:42:55.430 ************************************ 00:42:55.430 START TEST nvmf_target_disconnect_tc3 00:42:55.430 ************************************ 00:42:55.430 15:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc3 00:42:55.430 15:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=3336395 00:42:55.430 15:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:42:55.430 15:47:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:42:57.968 
15:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 3335449 00:42:57.968 15:47:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:42:58.907 Write completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Write completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Write completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Read completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Write completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Read completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Read completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Read completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Write completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Write completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Read completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Read completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Write completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Read completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Write completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Read completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Write completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Write completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Read completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Write completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Write completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Read completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Read completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Write completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Read completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Write completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Write completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Read completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Write completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Write completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Write completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 Read completed with error (sct=0, sc=8) 00:42:58.907 starting I/O failed 00:42:58.907 [2024-11-06 15:47:26.331842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:42:59.476 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 3335449 Killed "${NVMF_APP[@]}" "$@" 00:42:59.476 15:47:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:42:59.476 
15:47:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:42:59.476 15:47:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:59.476 15:47:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:59.476 15:47:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:42:59.476 15:47:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3336889 00:42:59.476 15:47:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3336889 00:42:59.476 15:47:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:42:59.476 15:47:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3336889 ']' 00:42:59.476 15:47:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:59.476 15:47:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:59.476 15:47:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:59.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:59.476 15:47:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:59.476 15:47:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:42:59.735 [2024-11-06 15:47:27.148844] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:42:59.735 [2024-11-06 15:47:27.148949] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:59.735 [2024-11-06 15:47:27.310009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:59.735 Write completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Write completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Read completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Read completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Write completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Read completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Write completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Write completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Read completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Write completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Write completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Read completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Write completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Read completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Read completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Read completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Write completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Read completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Read completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Write completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Write completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Write completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Read completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Write completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Read completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Read completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Read completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Read completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Write completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Write completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Read completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 Read completed with error (sct=0, sc=8) 00:42:59.735 starting I/O failed 00:42:59.735 [2024-11-06 15:47:27.337286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:42:59.994 [2024-11-06 15:47:27.421885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:42:59.994 [2024-11-06 15:47:27.421941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:59.994 [2024-11-06 15:47:27.421954] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:59.994 [2024-11-06 15:47:27.421968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:59.994 [2024-11-06 15:47:27.421978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:59.994 [2024-11-06 15:47:27.424350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:42:59.994 [2024-11-06 15:47:27.424440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:42:59.994 [2024-11-06 15:47:27.424505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:42:59.994 [2024-11-06 15:47:27.424529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:43:00.563 15:47:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:43:00.563 15:47:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@866 -- # return 0 00:43:00.563 15:47:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:00.563 15:47:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:00.563 15:47:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:43:00.563 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:00.563 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:00.563 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:00.563 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:43:00.563 Malloc0 00:43:00.563 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:00.563 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:43:00.563 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:00.563 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:43:00.563 [2024-11-06 15:47:28.127608] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000298c0/0x7f26dad1a940) succeed. 00:43:00.563 [2024-11-06 15:47:28.137660] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029a40/0x7f26dabbd940) succeed. 
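The app_setup_trace notices above name the snapshot tool for the 0xFFFF tracepoint mask this nvmf_tgt was started with. A hedged sketch of capturing that data while the instance (-i 0) is still running; the workspace path is taken from this job and is an assumption for any other machine.

```bash
# Debug aid only, not part of the test flow. Assumes the SPDK tree from this
# workspace and that the nvmf_tgt launched above (instance id 0) is running.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}

# Snapshot the registered nvmf tracepoints, as suggested by the notice above
"$SPDK_DIR/build/bin/spdk_trace" -s nvmf -i 0 > nvmf_trace.txt

# Or keep the shared-memory copy for offline analysis, also per the notice
cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0
```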
00:43:00.822 Write completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Read completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Read completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Read completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Write completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Read completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Read completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Write completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Read completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Read completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Write completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Write completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Read completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Read completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Write completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Read completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Write completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Write completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Write completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Read completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Read completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Write completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Write completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Write completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Read completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Write completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.822 Write completed with error (sct=0, sc=8) 00:43:00.822 starting I/O failed 00:43:00.823 Write completed with error (sct=0, sc=8) 00:43:00.823 starting I/O failed 00:43:00.823 Write completed with error (sct=0, sc=8) 00:43:00.823 starting I/O failed 00:43:00.823 Read completed with error (sct=0, sc=8) 00:43:00.823 starting I/O failed 00:43:00.823 Write completed with error (sct=0, sc=8) 00:43:00.823 starting I/O failed 00:43:00.823 Write completed with error (sct=0, sc=8) 00:43:00.823 starting I/O failed 00:43:00.823 [2024-11-06 15:47:28.342976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:43:00.823 [2024-11-06 15:47:28.344808] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:43:00.823 [2024-11-06 15:47:28.344843] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:43:00.823 [2024-11-06 15:47:28.344856] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:43:00.823 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:00.823 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:00.823 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:00.823 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:43:00.823 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:00.823 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:00.823 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:00.823 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:43:00.823 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:00.823 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:43:00.823 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:00.823 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:43:00.823 [2024-11-06 15:47:28.440850] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:43:00.823 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:00.823 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:43:00.823 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:00.823 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:43:00.823 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:00.823 15:47:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 3336395 00:43:01.761 [2024-11-06 15:47:29.348867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:43:01.761 qpair failed and we were unable to recover it. 
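The rpc_cmd calls above build the failover target for tc3: a 64 MB malloc bdev with 512-byte blocks, an RDMA transport with 1024 shared buffers, subsystem nqn.2016-06.io.spdk:cnode1 backed by Malloc0, and subsystem plus discovery listeners on 192.168.100.9:4420. A standalone sketch of the same sequence issued through scripts/rpc.py; the SPDK path and a target already listening on the default RPC socket are assumptions.

```bash
#!/usr/bin/env bash
# Sketch of the configuration shown above, sent via scripts/rpc.py rather than
# the test's rpc_cmd helper. Assumes nvmf_tgt is up on /var/tmp/spdk.sock.
set -e
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}
rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }

rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_transport -t rdma --num-shared-buffers 1024
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420
```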
00:43:01.761 [2024-11-06 15:47:29.350630] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:43:01.761 [2024-11-06 15:47:29.350662] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:43:01.761 [2024-11-06 15:47:29.350675] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:43:03.138 [2024-11-06 15:47:30.354543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:43:03.138 qpair failed and we were unable to recover it. 00:43:03.138 [2024-11-06 15:47:30.356294] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:43:03.138 [2024-11-06 15:47:30.356327] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:43:03.138 [2024-11-06 15:47:30.356340] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:43:04.076 [2024-11-06 15:47:31.360303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:43:04.076 qpair failed and we were unable to recover it. 00:43:04.076 [2024-11-06 15:47:31.362059] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:43:04.076 [2024-11-06 15:47:31.362095] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:43:04.076 [2024-11-06 15:47:31.362108] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:43:05.014 [2024-11-06 15:47:32.366109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:43:05.014 qpair failed and we were unable to recover it. 00:43:05.014 [2024-11-06 15:47:32.367847] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:43:05.014 [2024-11-06 15:47:32.367883] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:43:05.014 [2024-11-06 15:47:32.367896] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:43:05.952 [2024-11-06 15:47:33.371925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:43:05.952 qpair failed and we were unable to recover it. 
00:43:05.952 [2024-11-06 15:47:33.373670] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:43:05.952 [2024-11-06 15:47:33.373701] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:43:05.952 [2024-11-06 15:47:33.373714] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:43:06.889 [2024-11-06 15:47:34.377736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:43:06.889 qpair failed and we were unable to recover it. 00:43:06.889 [2024-11-06 15:47:34.380511] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:43:06.889 [2024-11-06 15:47:34.380603] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:43:06.889 [2024-11-06 15:47:34.380643] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:43:07.826 [2024-11-06 15:47:35.384674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:43:07.826 qpair failed and we were unable to recover it. 00:43:07.826 [2024-11-06 15:47:35.386329] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:43:07.826 [2024-11-06 15:47:35.386361] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:43:07.826 [2024-11-06 15:47:35.386375] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:43:08.763 [2024-11-06 15:47:36.390284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:43:08.763 qpair failed and we were unable to recover it. 00:43:08.763 [2024-11-06 15:47:36.393077] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:43:08.763 [2024-11-06 15:47:36.393176] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:43:08.763 [2024-11-06 15:47:36.393217] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cfb40 00:43:10.144 [2024-11-06 15:47:37.397312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:43:10.144 qpair failed and we were unable to recover it. 
00:43:10.144 [2024-11-06 15:47:37.398987] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:43:10.144 [2024-11-06 15:47:37.399019] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:43:10.144 [2024-11-06 15:47:37.399032] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cfb40 00:43:11.081 [2024-11-06 15:47:38.402958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:43:11.082 qpair failed and we were unable to recover it. 00:43:12.019 Read completed with error (sct=0, sc=8) 00:43:12.019 starting I/O failed 00:43:12.019 Write completed with error (sct=0, sc=8) 00:43:12.019 starting I/O failed 00:43:12.019 Write completed with error (sct=0, sc=8) 00:43:12.019 starting I/O failed 00:43:12.019 Write completed with error (sct=0, sc=8) 00:43:12.019 starting I/O failed 00:43:12.019 Write completed with error (sct=0, sc=8) 00:43:12.019 starting I/O failed 00:43:12.019 Write completed with error (sct=0, sc=8) 00:43:12.019 starting I/O failed 00:43:12.019 Write completed with error (sct=0, sc=8) 00:43:12.019 starting I/O failed 00:43:12.019 Write completed with error (sct=0, sc=8) 00:43:12.019 starting I/O failed 00:43:12.019 Read completed with error (sct=0, sc=8) 00:43:12.019 starting I/O failed 00:43:12.019 Write completed with error (sct=0, sc=8) 00:43:12.019 starting I/O failed 00:43:12.019 Write completed with error (sct=0, sc=8) 00:43:12.019 starting I/O failed 00:43:12.019 Write completed with error (sct=0, sc=8) 00:43:12.019 starting I/O failed 00:43:12.019 Read completed with error (sct=0, sc=8) 00:43:12.019 starting I/O failed 00:43:12.019 Read completed with error (sct=0, sc=8) 00:43:12.019 starting I/O failed 00:43:12.019 Read completed with error (sct=0, sc=8) 00:43:12.019 starting I/O failed 00:43:12.019 Write completed with error (sct=0, sc=8) 00:43:12.019 starting I/O failed 00:43:12.019 Read completed with error (sct=0, sc=8) 00:43:12.019 starting I/O failed 00:43:12.019 Read completed with error (sct=0, sc=8) 00:43:12.019 starting I/O failed 00:43:12.019 Write completed with error (sct=0, sc=8) 00:43:12.019 starting I/O failed 00:43:12.019 Read completed with error (sct=0, sc=8) 00:43:12.019 starting I/O failed 00:43:12.019 Read completed with error (sct=0, sc=8) 00:43:12.019 starting I/O failed 00:43:12.019 Write completed with error (sct=0, sc=8) 00:43:12.019 starting I/O failed 00:43:12.020 Write completed with error (sct=0, sc=8) 00:43:12.020 starting I/O failed 00:43:12.020 Read completed with error (sct=0, sc=8) 00:43:12.020 starting I/O failed 00:43:12.020 Read completed with error (sct=0, sc=8) 00:43:12.020 starting I/O failed 00:43:12.020 Write completed with error (sct=0, sc=8) 00:43:12.020 starting I/O failed 00:43:12.020 Read completed with error (sct=0, sc=8) 00:43:12.020 starting I/O failed 00:43:12.020 Write completed with error (sct=0, sc=8) 00:43:12.020 starting I/O failed 00:43:12.020 Write completed with error (sct=0, sc=8) 00:43:12.020 starting I/O failed 00:43:12.020 Write completed with error (sct=0, sc=8) 00:43:12.020 starting I/O failed 00:43:12.020 Read completed with error (sct=0, sc=8) 00:43:12.020 starting I/O failed 00:43:12.020 Read completed with error (sct=0, sc=8) 00:43:12.020 starting I/O failed 00:43:12.020 [2024-11-06 15:47:39.408850] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:43:12.020 [2024-11-06 15:47:39.410647] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:43:12.020 [2024-11-06 15:47:39.410683] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:43:12.020 [2024-11-06 15:47:39.410700] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:43:12.956 [2024-11-06 15:47:40.414716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:43:12.956 qpair failed and we were unable to recover it. 00:43:12.956 [2024-11-06 15:47:40.416672] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:43:12.956 [2024-11-06 15:47:40.416706] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:43:12.957 [2024-11-06 15:47:40.416724] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:43:13.902 [2024-11-06 15:47:41.420882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:43:13.902 qpair failed and we were unable to recover it. 00:43:13.902 [2024-11-06 15:47:41.421200] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Submitting Keep Alive failed 00:43:13.902 A controller has encountered a failure and is being reset. 00:43:13.902 Resorting to new failover address 192.168.100.9 00:43:13.902 [2024-11-06 15:47:41.421330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:43:13.902 [2024-11-06 15:47:41.421430] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:43:13.902 [2024-11-06 15:47:41.463485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:43:13.902 Controller properly reset. 00:43:14.162 Initializing NVMe Controllers 00:43:14.162 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:43:14.162 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:43:14.162 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:43:14.162 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:43:14.162 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:43:14.162 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:43:14.162 Initialization complete. Launching workers. 
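After the reconnect retries exhaust the primary port, the host resorts to the failover address 192.168.100.9, the controller is reset, and the worker threads are relaunched below. A quick, optional way to confirm from the host side that both RDMA listeners still answer discovery; it assumes nvme-cli is installed and the nvme-rdma module is loaded, and it is not part of the test itself.

```bash
# Optional sanity check, outside the test: probe both listeners with nvme-cli.
for addr in 192.168.100.8 192.168.100.9; do
    echo "== discovery via $addr =="
    nvme discover -t rdma -a "$addr" -s 4420 || echo "discovery via $addr failed"
done
```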
00:43:14.162 Starting thread on core 1 00:43:14.162 Starting thread on core 2 00:43:14.162 Starting thread on core 0 00:43:14.162 Starting thread on core 3 00:43:14.162 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:43:14.162 00:43:14.162 real 0m18.697s 00:43:14.162 user 1m0.855s 00:43:14.162 sys 0m4.729s 00:43:14.162 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:43:14.162 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:43:14.162 ************************************ 00:43:14.162 END TEST nvmf_target_disconnect_tc3 00:43:14.162 ************************************ 00:43:14.162 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:43:14.162 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:43:14.162 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:14.162 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:43:14.162 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:43:14.162 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:43:14.162 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:43:14.162 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:14.162 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:43:14.162 rmmod nvme_rdma 00:43:14.422 rmmod nvme_fabrics 00:43:14.422 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:14.422 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:43:14.422 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:43:14.422 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3336889 ']' 00:43:14.422 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3336889 00:43:14.422 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 3336889 ']' 00:43:14.422 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 3336889 00:43:14.422 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:43:14.422 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:43:14.422 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3336889 00:43:14.422 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:43:14.422 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:43:14.422 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3336889' 00:43:14.422 killing process with pid 3336889 00:43:14.422 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@971 -- # kill 3336889 00:43:14.422 15:47:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 3336889 00:43:16.330 15:47:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:16.330 15:47:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:43:16.330 00:43:16.330 real 0m41.588s 00:43:16.330 user 2m33.598s 00:43:16.330 sys 0m13.872s 00:43:16.330 15:47:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:43:16.330 15:47:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:43:16.330 ************************************ 00:43:16.330 END TEST nvmf_target_disconnect 00:43:16.330 ************************************ 00:43:16.330 15:47:43 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:43:16.330 00:43:16.330 real 13m10.939s 00:43:16.330 user 41m6.941s 00:43:16.330 sys 2m33.233s 00:43:16.330 15:47:43 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:43:16.330 15:47:43 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:43:16.330 ************************************ 00:43:16.330 END TEST nvmf_host 00:43:16.330 ************************************ 00:43:16.330 15:47:43 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]] 00:43:16.330 00:43:16.330 real 35m8.385s 00:43:16.330 user 104m37.307s 00:43:16.330 sys 7m46.019s 00:43:16.330 15:47:43 nvmf_rdma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:43:16.330 15:47:43 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:43:16.330 ************************************ 00:43:16.330 END TEST nvmf_rdma 00:43:16.330 ************************************ 00:43:16.330 15:47:43 -- spdk/autotest.sh@278 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:43:16.330 15:47:43 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:43:16.330 15:47:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:43:16.330 15:47:43 -- common/autotest_common.sh@10 -- # set +x 00:43:16.330 ************************************ 00:43:16.330 START TEST spdkcli_nvmf_rdma 00:43:16.330 ************************************ 00:43:16.330 15:47:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:43:16.591 * Looking for test storage... 
00:43:16.591 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@1691 -- # lcov --version 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:43:16.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:16.591 --rc genhtml_branch_coverage=1 00:43:16.591 --rc genhtml_function_coverage=1 00:43:16.591 --rc genhtml_legend=1 00:43:16.591 --rc geninfo_all_blocks=1 00:43:16.591 --rc geninfo_unexecuted_blocks=1 00:43:16.591 00:43:16.591 ' 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:43:16.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:43:16.591 --rc genhtml_branch_coverage=1 00:43:16.591 --rc genhtml_function_coverage=1 00:43:16.591 --rc genhtml_legend=1 00:43:16.591 --rc geninfo_all_blocks=1 00:43:16.591 --rc geninfo_unexecuted_blocks=1 00:43:16.591 00:43:16.591 ' 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:43:16.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:16.591 --rc genhtml_branch_coverage=1 00:43:16.591 --rc genhtml_function_coverage=1 00:43:16.591 --rc genhtml_legend=1 00:43:16.591 --rc geninfo_all_blocks=1 00:43:16.591 --rc geninfo_unexecuted_blocks=1 00:43:16.591 00:43:16.591 ' 00:43:16.591 15:47:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:43:16.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:16.591 --rc genhtml_branch_coverage=1 00:43:16.591 --rc genhtml_function_coverage=1 00:43:16.591 --rc genhtml_legend=1 00:43:16.592 --rc geninfo_all_blocks=1 00:43:16.592 --rc geninfo_unexecuted_blocks=1 00:43:16.592 00:43:16.592 ' 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:16.592 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter 
run_nvmf_tgt 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3339078 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 3339078 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@833 -- # '[' -z 3339078 ']' 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@838 -- # local max_retries=100 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:16.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@842 -- # xtrace_disable 00:43:16.592 15:47:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:43:16.852 [2024-11-06 15:47:44.295007] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:43:16.852 [2024-11-06 15:47:44.295119] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3339078 ] 00:43:16.852 [2024-11-06 15:47:44.447847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:17.112 [2024-11-06 15:47:44.558192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:17.112 [2024-11-06 15:47:44.558218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:17.682 15:47:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:43:17.682 15:47:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@866 -- # return 0 00:43:17.682 15:47:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:43:17.682 15:47:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:17.682 15:47:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:43:17.682 15:47:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:43:17.682 15:47:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:43:17.682 15:47:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:43:17.682 15:47:45 spdkcli_nvmf_rdma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:43:17.682 15:47:45 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:17.682 15:47:45 spdkcli_nvmf_rdma -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:17.682 15:47:45 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:17.682 15:47:45 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:17.682 15:47:45 spdkcli_nvmf_rdma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:17.682 15:47:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:17.682 15:47:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:43:17.682 15:47:45 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:17.682 15:47:45 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:17.682 15:47:45 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:43:17.682 15:47:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:24.258 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:43:24.259 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:43:24.259 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:43:24.259 Found net devices under 0000:18:00.0: mlx_0_0 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:43:24.259 Found net devices under 0000:18:00.1: mlx_0_1 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # is_hw=yes 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@444 
-- # [[ yes == yes ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # rdma_device_init 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:43:24.259 
15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:43:24.259 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:43:24.259 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:43:24.259 altname enp24s0f0np0 00:43:24.259 altname ens785f0np0 00:43:24.259 inet 192.168.100.8/24 scope global mlx_0_0 00:43:24.259 valid_lft forever preferred_lft forever 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:43:24.259 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:43:24.259 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:43:24.259 altname enp24s0f1np1 00:43:24.259 altname ens785f1np1 00:43:24.259 inet 192.168.100.9/24 scope global mlx_0_1 00:43:24.259 valid_lft forever preferred_lft forever 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # return 0 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:43:24.259 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:43:24.260 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:43:24.260 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:43:24.260 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:43:24.260 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:43:24.260 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:43:24.260 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:43:24.260 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:43:24.260 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:43:24.260 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:43:24.260 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:43:24.260 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:43:24.260 192.168.100.9' 00:43:24.260 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:43:24.260 192.168.100.9' 00:43:24.260 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # head -n 1 00:43:24.260 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:43:24.520 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:43:24.520 192.168.100.9' 00:43:24.520 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # tail -n +2 00:43:24.520 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # head -n 1 00:43:24.520 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:43:24.520 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:43:24.520 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:43:24.520 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:43:24.520 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:43:24.520 15:47:51 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:43:24.520 15:47:51 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:43:24.520 15:47:51 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:43:24.520 15:47:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:24.520 15:47:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:43:24.520 15:47:51 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:43:24.520 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:43:24.520 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:43:24.520 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:43:24.520 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:43:24.520 '\''/bdevs/malloc create 32 512 
Malloc6'\'' '\''Malloc6'\'' True 00:43:24.520 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:43:24.520 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:43:24.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:43:24.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:43:24.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:43:24.520 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:43:24.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:43:24.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:43:24.520 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:43:24.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:43:24.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:43:24.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:43:24.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:43:24.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:43:24.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:43:24.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:43:24.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:43:24.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:43:24.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:43:24.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:43:24.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:43:24.520 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:43:24.520 ' 00:43:27.815 [2024-11-06 15:47:54.827756] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x61200002b840/0x7fda0e031940) succeed. 00:43:27.815 [2024-11-06 15:47:54.837903] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x61200002b9c0/0x7fda0dca6940) succeed. 
00:43:28.754 [2024-11-06 15:47:56.309755] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:43:31.292 [2024-11-06 15:47:58.793977] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:43:33.829 [2024-11-06 15:48:00.973354] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:43:35.210 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:43:35.210 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:43:35.210 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:43:35.210 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:43:35.210 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:43:35.210 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:43:35.210 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:43:35.210 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:43:35.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:43:35.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:43:35.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:43:35.210 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:43:35.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:43:35.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:43:35.210 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:43:35.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:43:35.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:43:35.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:43:35.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:43:35.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:43:35.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:43:35.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:43:35.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:43:35.210 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:43:35.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:43:35.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:43:35.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:43:35.210 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:43:35.210 15:48:02 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:43:35.210 15:48:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:35.210 15:48:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:43:35.210 15:48:02 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:43:35.210 15:48:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:35.210 15:48:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:43:35.210 15:48:02 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:43:35.210 15:48:02 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:43:35.780 15:48:03 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:43:35.780 15:48:03 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:43:35.780 15:48:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:43:35.780 15:48:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:35.780 15:48:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:43:35.780 15:48:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:43:35.780 15:48:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:35.780 15:48:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:43:35.780 15:48:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:43:35.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:43:35.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:43:35.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:43:35.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:43:35.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:43:35.780 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:43:35.780 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:43:35.780 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:43:35.780 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:43:35.780 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:43:35.780 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:43:35.780 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:43:35.780 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:43:35.780 ' 00:43:42.403 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:43:42.403 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:43:42.403 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:43:42.403 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:43:42.403 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:43:42.403 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:43:42.403 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:43:42.403 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:43:42.403 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:43:42.403 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:43:42.403 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:43:42.403 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:43:42.403 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:43:42.403 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:43:42.403 15:48:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:43:42.403 15:48:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:42.403 15:48:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:43:42.403 15:48:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 3339078 00:43:42.403 15:48:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@952 -- # '[' -z 3339078 ']' 00:43:42.403 15:48:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # kill -0 3339078 00:43:42.403 15:48:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@957 -- # uname 00:43:42.403 15:48:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:43:42.403 15:48:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3339078 00:43:42.403 15:48:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:43:42.403 15:48:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:43:42.404 15:48:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3339078' 00:43:42.404 killing process with pid 3339078 00:43:42.404 15:48:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@971 -- # kill 3339078 00:43:42.404 15:48:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@976 -- # wait 3339078 00:43:43.352 15:48:10 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:43:43.352 15:48:10 spdkcli_nvmf_rdma -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:43.352 15:48:10 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync 00:43:43.352 15:48:10 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 
00:43:43.352 15:48:10 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:43:43.352 15:48:10 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e 00:43:43.352 15:48:10 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:43.352 15:48:10 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:43:43.352 rmmod nvme_rdma 00:43:43.352 rmmod nvme_fabrics 00:43:43.612 15:48:10 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:43.612 15:48:10 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e 00:43:43.612 15:48:10 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0 00:43:43.612 15:48:10 spdkcli_nvmf_rdma -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:43:43.612 15:48:10 spdkcli_nvmf_rdma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:43.612 15:48:10 spdkcli_nvmf_rdma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:43:43.612 00:43:43.612 real 0m27.039s 00:43:43.612 user 0m58.559s 00:43:43.612 sys 0m6.463s 00:43:43.612 15:48:10 spdkcli_nvmf_rdma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:43:43.612 15:48:10 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:43:43.612 ************************************ 00:43:43.612 END TEST spdkcli_nvmf_rdma 00:43:43.612 ************************************ 00:43:43.612 15:48:11 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:43:43.612 15:48:11 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:43:43.612 15:48:11 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:43:43.612 15:48:11 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:43:43.612 15:48:11 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:43:43.612 15:48:11 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:43:43.612 15:48:11 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:43:43.612 15:48:11 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:43:43.612 15:48:11 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:43:43.612 15:48:11 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:43:43.612 15:48:11 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:43:43.612 15:48:11 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:43:43.612 15:48:11 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:43:43.612 15:48:11 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:43:43.612 15:48:11 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:43:43.612 15:48:11 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:43:43.612 15:48:11 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:43:43.612 15:48:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:43.612 15:48:11 -- common/autotest_common.sh@10 -- # set +x 00:43:43.612 15:48:11 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:43:43.612 15:48:11 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:43:43.612 15:48:11 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:43:43.612 15:48:11 -- common/autotest_common.sh@10 -- # set +x 00:43:43.612 ##### CORE BT nvmf_tgt_3314748.core.bt.txt ##### 00:43:43.612 00:43:43.612 gdb: warning: Couldn't determine a path for the index cache directory. 00:43:43.612 [New LWP 3314748] 00:43:43.612 [New LWP 3314797] 00:43:43.612 [Thread debugging using libthread_db enabled] 00:43:43.612 Using host libthread_db library "/usr/lib64/libthread_db.so.1". 00:43:43.612 Core was generated by `/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF'. 00:43:43.612 Program terminated with signal SIGABRT, Aborted. 
00:43:43.612 #0 0x00007f0cffb7a834 in __pthread_kill_implementation () from /usr/lib64/libc.so.6 00:43:43.612 [Current thread is 1 (Thread 0x7f0cfe90aa40 (LWP 3314748))] 00:43:43.612 00:43:43.612 Thread 2 (Thread 0x7f0cfb4006c0 (LWP 3314797)): 00:43:43.612 #0 0x00007f0cffbffe62 in epoll_wait () from /usr/lib64/libc.so.6 00:43:43.612 No symbol table info available. 00:43:43.612 #1 0x00007f0d00838e4d in eal_intr_handle_interrupts (pfd=7, totalfds=1) at ../lib/eal/linux/eal_interrupts.c:1077 00:43:43.612 events = {{events = 0, data = {ptr = 0x0, fd = 0, u32 = 0, u64 = 0}}} 00:43:43.612 nfds = -1 00:43:43.612 #2 0x00007f0d00839335 in eal_intr_thread_main (arg=0x0) at ../lib/eal/linux/eal_interrupts.c:1163 00:43:43.612 pipe_event = {events = 3, data = {ptr = 0x5, fd = 5, u32 = 5, u64 = 5}} 00:43:43.612 src = 0x0 00:43:43.612 numfds = 1 00:43:43.612 pfd = 7 00:43:43.612 __func__ = "eal_intr_thread_main" 00:43:43.612 #3 0x00007f0d007ed01a in control_thread_start (arg=0x60300002eff0) at ../lib/eal/common/eal_common_thread.c:282 00:43:43.612 params = 0x60300002eff0 00:43:43.612 start_arg = 0x0 00:43:43.612 start_routine = 0x7f0d00838f21 00:43:43.613 #4 0x00007f0d00823446 in thread_start_wrapper (arg=0x7f0cfc9096a0) at ../lib/eal/unix/rte_thread.c:114 00:43:43.613 ctx = 0x7f0cfc9096a0 00:43:43.613 thread_func = 0x7f0d007ecf81 00:43:43.613 thread_args = 0x60300002eff0 00:43:43.613 ret = 0 00:43:43.613 #5 0x00007f0cffb78897 in start_thread () from /usr/lib64/libc.so.6 00:43:43.613 No symbol table info available. 00:43:43.613 #6 0x00007f0cffbffa5c in clone3 () from /usr/lib64/libc.so.6 00:43:43.613 No symbol table info available. 00:43:43.613 00:43:43.613 Thread 1 (Thread 0x7f0cfe90aa40 (LWP 3314748)): 00:43:43.613 #0 0x00007f0cffb7a834 in __pthread_kill_implementation () from /usr/lib64/libc.so.6 00:43:43.613 No symbol table info available. 00:43:43.613 #1 0x00007f0cffb288ee in raise () from /usr/lib64/libc.so.6 00:43:43.613 No symbol table info available. 00:43:43.613 #2 0x00007f0cffb108ff in abort () from /usr/lib64/libc.so.6 00:43:43.613 No symbol table info available. 00:43:43.613 #3 0x00007f0d04936f0f in __sanitizer::Abort() () from /usr/lib64/libasan.so.8 00:43:43.613 No symbol table info available. 00:43:43.613 #4 0x00007f0d04946401 in __sanitizer::Die() () from /usr/lib64/libasan.so.8 00:43:43.613 No symbol table info available. 00:43:43.613 #5 0x00007f0d0494f7cc in __lsan::HandleLeaks() () from /usr/lib64/libasan.so.8 00:43:43.613 No symbol table info available. 00:43:43.613 #6 0x00007f0d0494dc75 in __lsan::DoLeakCheck() () from /usr/lib64/libasan.so.8 00:43:43.613 No symbol table info available. 00:43:43.613 #7 0x00007f0cffb2aa2d in __cxa_finalize () from /usr/lib64/libc.so.6 00:43:43.613 No symbol table info available. 00:43:43.613 #8 0x00007f0d04865927 in __do_global_dtors_aux () from /usr/lib64/libasan.so.8 00:43:43.613 No symbol table info available. 00:43:43.613 #9 0x00007ffe4e94ec70 in ?? () 00:43:43.613 No symbol table info available. 
00:43:43.613 #10 0x00007f0d04eeb0f2 in _dl_call_fini (closure_map=0x7f0d04ee85a0) at dl-call_fini.c:43 00:43:43.613 array = 0x7f0d049a0778 00:43:43.613 sz = 00:43:43.613 map = 0x7f0d04ee85a0 00:43:43.613 fini_array = 00:43:43.613 fini = 00:43:43.613 Backtrace stopped: frame did not save the PC 00:43:43.613 00:43:43.613 -- 00:43:48.895 INFO: APP EXITING 00:43:48.895 INFO: killing all VMs 00:43:48.895 INFO: killing vhost app 00:43:48.895 INFO: EXIT DONE 00:43:51.435 Waiting for block devices as requested 00:43:51.435 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:43:51.435 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:51.695 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:51.695 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:51.695 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:51.955 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:51.955 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:51.955 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:52.213 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:52.213 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:52.214 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:52.473 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:52.473 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:52.473 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:52.732 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:52.732 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:52.732 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:56.027 Cleaning 00:43:56.027 Removing: /var/run/dpdk/spdk0/config 00:43:56.027 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:43:56.027 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:43:56.027 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:43:56.027 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:43:56.027 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:43:56.027 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:43:56.027 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:43:56.027 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:43:56.027 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:43:56.027 Removing: /var/run/dpdk/spdk0/hugepage_info 00:43:56.027 Removing: /var/run/dpdk/spdk1/config 00:43:56.027 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:43:56.027 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:43:56.027 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:43:56.027 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:43:56.027 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:43:56.027 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:43:56.027 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:43:56.027 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:43:56.027 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:43:56.287 Removing: /var/run/dpdk/spdk1/hugepage_info 00:43:56.287 Removing: /var/run/dpdk/spdk1/mp_socket 00:43:56.287 Removing: /var/run/dpdk/spdk2/config 00:43:56.287 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:43:56.287 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:43:56.287 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:43:56.287 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:43:56.287 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:43:56.287 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:43:56.287 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:43:56.287 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:43:56.287 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:43:56.287 Removing: /var/run/dpdk/spdk2/hugepage_info 00:43:56.287 Removing: /var/run/dpdk/spdk3/config 00:43:56.287 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:43:56.287 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:43:56.287 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:43:56.287 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:43:56.287 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:43:56.287 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:43:56.287 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:43:56.287 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:43:56.287 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:43:56.287 Removing: /var/run/dpdk/spdk3/hugepage_info 00:43:56.287 Removing: /var/run/dpdk/spdk4/config 00:43:56.287 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:43:56.287 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:43:56.287 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:43:56.287 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:43:56.287 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:43:56.287 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:43:56.287 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:43:56.287 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:43:56.287 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:43:56.287 Removing: /var/run/dpdk/spdk4/hugepage_info 00:43:56.287 Removing: /dev/shm/bdevperf_trace.pid2985203 00:43:56.287 Removing: /dev/shm/bdev_svc_trace.1 00:43:56.287 Removing: /dev/shm/nvmf_trace.0 00:43:56.287 Removing: /dev/shm/spdk_tgt_trace.pid2937534 00:43:56.287 Removing: /var/run/dpdk/spdk0 00:43:56.287 Removing: /var/run/dpdk/spdk1 00:43:56.287 Removing: /var/run/dpdk/spdk2 00:43:56.287 Removing: /var/run/dpdk/spdk3 00:43:56.287 Removing: /var/run/dpdk/spdk4 00:43:56.287 Removing: /var/run/dpdk/spdk_pid2932948 00:43:56.287 Removing: /var/run/dpdk/spdk_pid2934863 00:43:56.287 Removing: /var/run/dpdk/spdk_pid2937534 00:43:56.287 Removing: /var/run/dpdk/spdk_pid2938326 00:43:56.287 Removing: /var/run/dpdk/spdk_pid2939445 00:43:56.287 Removing: /var/run/dpdk/spdk_pid2939831 00:43:56.287 Removing: /var/run/dpdk/spdk_pid2940971 00:43:56.287 Removing: /var/run/dpdk/spdk_pid2941159 00:43:56.287 Removing: /var/run/dpdk/spdk_pid2941661 00:43:56.287 Removing: /var/run/dpdk/spdk_pid2946763 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2948666 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2949379 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2949988 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2950608 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2951192 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2951430 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2951638 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2951879 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2952791 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2955574 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2956188 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2956713 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2956793 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2958257 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2958311 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2960268 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2960450 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2960870 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2961049 
00:43:56.548 Removing: /var/run/dpdk/spdk_pid2961548 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2961638 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2962856 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2963170 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2963474 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2967416 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2971343 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2980051 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2980771 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2985203 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2985406 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2989453 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2994799 00:43:56.548 Removing: /var/run/dpdk/spdk_pid2997691 00:43:56.548 Removing: /var/run/dpdk/spdk_pid3007154 00:43:56.548 Removing: /var/run/dpdk/spdk_pid3029492 00:43:56.548 Removing: /var/run/dpdk/spdk_pid3033214 00:43:56.548 Removing: /var/run/dpdk/spdk_pid3113223 00:43:56.548 Removing: /var/run/dpdk/spdk_pid3117805 00:43:56.548 Removing: /var/run/dpdk/spdk_pid3122876 00:43:56.548 Removing: /var/run/dpdk/spdk_pid3131351 00:43:56.548 Removing: /var/run/dpdk/spdk_pid3157818 00:43:56.548 Removing: /var/run/dpdk/spdk_pid3161839 00:43:56.548 Removing: /var/run/dpdk/spdk_pid3198113 00:43:56.548 Removing: /var/run/dpdk/spdk_pid3199595 00:43:56.548 Removing: /var/run/dpdk/spdk_pid3201142 00:43:56.548 Removing: /var/run/dpdk/spdk_pid3202577 00:43:56.548 Removing: /var/run/dpdk/spdk_pid3206802 00:43:56.548 Removing: /var/run/dpdk/spdk_pid3212078 00:43:56.548 Removing: /var/run/dpdk/spdk_pid3218839 00:43:56.548 Removing: /var/run/dpdk/spdk_pid3219656 00:43:56.548 Removing: /var/run/dpdk/spdk_pid3220532 00:43:56.548 Removing: /var/run/dpdk/spdk_pid3221383 00:43:56.548 Removing: /var/run/dpdk/spdk_pid3221744 00:43:56.548 Removing: /var/run/dpdk/spdk_pid3225948 00:43:56.548 Removing: /var/run/dpdk/spdk_pid3226012 00:43:56.548 Removing: /var/run/dpdk/spdk_pid3230272 00:43:56.548 Removing: /var/run/dpdk/spdk_pid3231079 00:43:56.808 Removing: /var/run/dpdk/spdk_pid3231600 00:43:56.808 Removing: /var/run/dpdk/spdk_pid3232286 00:43:56.808 Removing: /var/run/dpdk/spdk_pid3232301 00:43:56.808 Removing: /var/run/dpdk/spdk_pid3241308 00:43:56.808 Removing: /var/run/dpdk/spdk_pid3242758 00:43:56.808 Removing: /var/run/dpdk/spdk_pid3244208 00:43:56.808 Removing: /var/run/dpdk/spdk_pid3245650 00:43:56.808 Removing: /var/run/dpdk/spdk_pid3247093 00:43:56.808 Removing: /var/run/dpdk/spdk_pid3248534 00:43:56.808 Removing: /var/run/dpdk/spdk_pid3254545 00:43:56.808 Removing: /var/run/dpdk/spdk_pid3255124 00:43:56.808 Removing: /var/run/dpdk/spdk_pid3264069 00:43:56.808 Removing: /var/run/dpdk/spdk_pid3269018 00:43:56.808 Removing: /var/run/dpdk/spdk_pid3293691 00:43:56.808 Removing: /var/run/dpdk/spdk_pid3295922 00:43:56.808 Removing: /var/run/dpdk/spdk_pid3301067 00:43:56.808 Removing: /var/run/dpdk/spdk_pid3310452 00:43:56.808 Removing: /var/run/dpdk/spdk_pid3310461 00:43:56.808 Removing: /var/run/dpdk/spdk_pid3329010 00:43:56.808 Removing: /var/run/dpdk/spdk_pid3329293 00:43:56.808 Removing: /var/run/dpdk/spdk_pid3334502 00:43:56.808 Removing: /var/run/dpdk/spdk_pid3334903 00:43:56.808 Removing: /var/run/dpdk/spdk_pid3336395 00:43:56.808 Removing: /var/run/dpdk/spdk_pid3339078 00:43:56.808 Clean 00:43:58.187 15:48:25 -- common/autotest_common.sh@1451 -- # return 1 00:43:58.188 15:48:25 -- spdk/autotest.sh@384 -- # trap - ERR 00:43:58.188 15:48:25 -- spdk/autotest.sh@384 -- # print_backtrace 00:43:58.188 15:48:25 -- 
common/autotest_common.sh@1155 -- # [[ ehxBET =~ e ]] 00:43:58.188 15:48:25 -- common/autotest_common.sh@1157 -- # args=('/var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf') 00:43:58.188 15:48:25 -- common/autotest_common.sh@1157 -- # local args 00:43:58.188 15:48:25 -- common/autotest_common.sh@1159 -- # xtrace_disable 00:43:58.188 15:48:25 -- common/autotest_common.sh@10 -- # set +x 00:43:58.188 ========== Backtrace start: ========== 00:43:58.188 00:43:58.188 in /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh:384 -> main(["/var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf"]) 00:43:58.188 ... 00:43:58.188 379 fi 00:43:58.188 380 00:43:58.188 381 trap - SIGINT SIGTERM EXIT 00:43:58.188 382 00:43:58.188 383 timing_enter post_cleanup 00:43:58.188 => 384 autotest_cleanup 00:43:58.188 385 timing_exit post_cleanup 00:43:58.188 386 00:43:58.188 387 timing_exit autotest 00:43:58.188 388 chmod a+r $output_dir/timing.txt 00:43:58.188 389 00:43:58.188 ... 00:43:58.188 00:43:58.188 ========== Backtrace end ========== 00:43:58.188 15:48:25 -- common/autotest_common.sh@1196 -- # return 0 00:43:58.188 15:48:25 -- spdk/autorun.sh@27 -- $ trap - ERR 00:43:58.188 15:48:25 -- spdk/autorun.sh@27 -- $ print_backtrace 00:43:58.188 15:48:25 -- common/autotest_common.sh@1155 -- $ [[ ehxBET =~ e ]] 00:43:58.188 15:48:25 -- common/autotest_common.sh@1157 -- $ args=('/var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf') 00:43:58.188 15:48:25 -- common/autotest_common.sh@1157 -- $ local args 00:43:58.188 15:48:25 -- common/autotest_common.sh@1159 -- $ xtrace_disable 00:43:58.188 15:48:25 -- common/autotest_common.sh@10 -- $ set +x 00:43:58.188 ========== Backtrace start: ========== 00:43:58.188 00:43:58.188 in spdk/autorun.sh:27 -> main(["/var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf"]) 00:43:58.188 ... 00:43:58.188 22 trap 'timing_finish || exit 1' EXIT 00:43:58.188 23 00:43:58.188 24 # Runs agent scripts 00:43:58.188 25 $rootdir/autobuild.sh "$conf" 00:43:58.188 26 if ((SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1)); then 00:43:58.188 => 27 sudo -E $rootdir/autotest.sh "$conf" 00:43:58.188 28 fi 00:43:58.188 ... 
00:43:58.188 00:43:58.188 ========== Backtrace end ========== 00:43:58.188 15:48:25 -- common/autotest_common.sh@1196 -- $ return 0 00:43:58.188 15:48:25 -- spdk/autorun.sh@1 -- $ timing_finish 00:43:58.188 15:48:25 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt ]] 00:43:58.188 15:48:25 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:43:58.188 15:48:25 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:43:58.188 15:48:25 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:43:58.461 [Pipeline] } 00:43:58.478 [Pipeline] // stage 00:43:58.486 [Pipeline] } 00:43:58.503 [Pipeline] // timeout 00:43:58.510 [Pipeline] } 00:43:58.514 ERROR: script returned exit code 1 00:43:58.514 Setting overall build result to FAILURE 00:43:58.528 [Pipeline] // catchError 00:43:58.533 [Pipeline] } 00:43:58.548 [Pipeline] // wrap 00:43:58.554 [Pipeline] } 00:43:58.568 [Pipeline] // catchError 00:43:58.577 [Pipeline] stage 00:43:58.580 [Pipeline] { (Epilogue) 00:43:58.593 [Pipeline] catchError 00:43:58.595 [Pipeline] { 00:43:58.608 [Pipeline] echo 00:43:58.610 Cleanup processes 00:43:58.616 [Pipeline] sh 00:43:58.908 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:43:58.908 2919759 sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730902091 00:43:58.908 2919791 bash /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730902091 00:43:58.908 3350371 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:43:58.922 [Pipeline] sh 00:43:59.210 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:43:59.210 ++ grep -v 'sudo pgrep' 00:43:59.210 ++ awk '{print $1}' 00:43:59.210 + sudo kill -9 2919759 2919791 00:43:59.223 [Pipeline] sh 00:43:59.511 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:44:06.099 [Pipeline] sh 00:44:06.386 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:44:06.386 Artifacts sizes are good 00:44:06.401 [Pipeline] archiveArtifacts 00:44:06.408 Archiving artifacts 00:44:07.947 [Pipeline] sh 00:44:08.232 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:44:08.248 [Pipeline] cleanWs 00:44:08.258 [WS-CLEANUP] Deleting project workspace... 00:44:08.258 [WS-CLEANUP] Deferred wipeout is used... 00:44:08.265 [WS-CLEANUP] done 00:44:08.267 [Pipeline] } 00:44:08.284 [Pipeline] // catchError 00:44:08.295 [Pipeline] echo 00:44:08.297 Tests finished with errors. Please check the logs for more info. 00:44:08.300 [Pipeline] echo 00:44:08.302 Execution node will be rebooted. 00:44:08.317 [Pipeline] build 00:44:08.320 Scheduling project: reset-job 00:44:08.333 [Pipeline] sh 00:44:08.618 + logger -p user.err -t JENKINS-CI 00:44:08.628 [Pipeline] } 00:44:08.640 [Pipeline] // stage 00:44:08.645 [Pipeline] } 00:44:08.659 [Pipeline] // node 00:44:08.664 [Pipeline] End of Pipeline 00:44:08.699 Finished: FAILURE